modelId: string (lengths 5 to 139)
author: string (lengths 2 to 42)
last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-07-16 00:42:46)
downloads: int64 (0 to 223M)
likes: int64 (0 to 11.7k)
library_name: string (522 classes)
tags: list (lengths 1 to 4.05k)
pipeline_tag: string (55 classes)
createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-07-16 00:42:16)
card: string (lengths 11 to 1.01M)
nicsur/ppo-Huggy
nicsur
2023-11-25T21:02:48Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-25T21:02:43Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: nicsur/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
kjan/Llama-2-7b-chat-finetune
kjan
2023-11-25T20:58:52Z
4
0
peft
[ "peft", "region:us" ]
null
2023-11-23T20:10:19Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
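The card above records the quantization settings but no usage code. A minimal sketch of rebuilding that `bitsandbytes` config and attaching the adapter with `peft`; the base checkpoint name is an assumption inferred from the repo name, since the card never states it.

```python
# Hedged sketch, not from the card: rebuild the quantization config listed
# above and attach the LoRA adapter to an assumed Llama-2-7b-chat base model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # assumed base model, not stated in the card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "kjan/Llama-2-7b-chat-finetune")
```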
tmklein/path-to-save-model
tmklein
2023-11-25T20:56:37Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-11-25T20:25:03Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - tmklein/path-to-save-model These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
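The LoRA card above stops at sample images; a minimal inference sketch with diffusers, assuming a CUDA host and fp16:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the stated base model, then attach the LoRA adaption weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("tmklein/path-to-save-model")

image = pipe("a photo of sks dog").images[0]  # instance prompt from the card metadata
image.save("sks_dog.png")
```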
pavpanda/charizard-is-blastoise-dup
pavpanda
2023-11-25T20:55:40Z
1
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-25T20:45:20Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### charizard_is_blastoise_dup Dreambooth model trained by pavpanda with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
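The card above defers usage to Colab notebooks; since the tags mark the repo as a full `StableDiffusionPipeline` checkpoint, a minimal local sketch (the prompt token is an assumption taken from the concept name):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: load the DreamBooth checkpoint directly; fp16 + CUDA are
# assumptions about the host, not requirements stated in the card.
pipe = StableDiffusionPipeline.from_pretrained(
    "pavpanda/charizard-is-blastoise-dup", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of charizard_is_blastoise_dup").images[0]
image.save("concept.png")
```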
huggingsaurusRex/bert-base-uncased-for-mountain-ner
huggingsaurusRex
2023-11-25T20:55:16Z
8
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "en", "dataset:DFKI-SLT/few-nerd", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-11-25T17:42:54Z
--- datasets: - DFKI-SLT/few-nerd language: - en metrics: - f1 - precision - recall --- # HuggingsaurusRex/bert-base-uncased-for-mountain-ner ## Purpose Detect mountain names in text using token classification. ## Usage ```python from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline # Load model and tokenizer model = AutoModelForTokenClassification.from_pretrained('huggingsaurusRex/bert-base-uncased-for-mountain-ner') tokenizer = AutoTokenizer.from_pretrained('huggingsaurusRex/bert-base-uncased-for-mountain-ner') # Create NER pipeline ner = pipeline('ner', model=model, tokenizer=tokenizer) # Perform inference res = ner("I spent days climbing Mount Everest.") print(res) ``` ## Architecture The model is a BERT-based token classification model fine-tuned on the Few-NERD dataset. ## Results - F1-Score: 0.87 - Precision: 0.84 - Recall: 0.91 ## Direct Link [HuggingsaurusRex/bert-base-uncased-for-mountain-ner](https://huggingface.co/huggingsaurusRex/bert-base-uncased-for-mountain-ner)
hkivancoral/hushem_1x_beit_base_rms_00001_fold4
hkivancoral
2023-11-25T20:54:46Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T20:44:50Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_beit_base_rms_00001_fold4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.9523809523809523 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_beit_base_rms_00001_fold4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2419 - Accuracy: 0.9524 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3507 | 0.4286 | | 1.4746 | 2.0 | 12 | 1.0121 | 0.5238 | | 1.4746 | 3.0 | 18 | 0.4084 | 0.9048 | | 0.4975 | 4.0 | 24 | 0.8867 | 0.6905 | | 0.135 | 5.0 | 30 | 0.3643 | 0.9048 | | 0.135 | 6.0 | 36 | 0.2799 | 0.9048 | | 0.0217 | 7.0 | 42 | 0.2749 | 0.9286 | | 0.0217 | 8.0 | 48 | 0.1461 | 0.9524 | | 0.0073 | 9.0 | 54 | 0.2904 | 0.9286 | | 0.003 | 10.0 | 60 | 0.2142 | 0.9762 | | 0.003 | 11.0 | 66 | 0.2416 | 0.9048 | | 0.0024 | 12.0 | 72 | 0.2155 | 0.9286 | | 0.0024 | 13.0 | 78 | 0.1970 | 0.9524 | | 0.0018 | 14.0 | 84 | 0.2474 | 0.9286 | | 0.002 | 15.0 | 90 | 0.2996 | 0.9048 | | 0.002 | 16.0 | 96 | 0.2243 | 0.9524 | | 0.0011 | 17.0 | 102 | 0.2323 | 0.9524 | | 0.0011 | 18.0 | 108 | 0.2007 | 0.9286 | | 0.0019 | 19.0 | 114 | 0.2031 | 0.9286 | | 0.0015 | 20.0 | 120 | 0.2492 | 0.9286 | | 0.0015 | 21.0 | 126 | 0.2398 | 0.9286 | | 0.0022 | 22.0 | 132 | 0.2207 | 0.9286 | | 0.0022 | 23.0 | 138 | 0.2104 | 0.9286 | | 0.001 | 24.0 | 144 | 0.2272 | 0.9524 | | 0.0009 | 25.0 | 150 | 0.2107 | 0.9286 | | 0.0009 | 26.0 | 156 | 0.2183 | 0.9524 | | 0.0009 | 27.0 | 162 | 0.2098 | 0.9524 | | 0.0009 | 28.0 | 168 | 0.2285 | 0.9524 | | 0.0007 | 29.0 | 174 | 0.2209 | 0.9524 | | 0.0007 | 30.0 | 180 | 0.2991 | 0.9524 | | 0.0007 | 31.0 | 186 | 0.2929 | 0.9286 | | 0.0008 | 32.0 | 192 | 0.2866 | 0.9286 | | 0.0008 | 33.0 | 198 | 0.2902 | 0.9524 | | 0.0007 | 34.0 | 204 | 0.2876 | 0.9524 | | 0.0041 | 35.0 | 210 | 0.2290 | 0.9524 | | 0.0041 | 36.0 | 216 | 0.2314 | 0.9524 | | 0.0005 | 37.0 | 222 | 0.2320 | 0.9524 | | 0.0005 | 38.0 | 228 | 0.2342 | 0.9524 | | 0.0005 | 39.0 | 234 | 0.2418 | 0.9524 | | 0.0012 | 40.0 | 240 | 0.2419 | 0.9524 | | 0.0012 | 41.0 | 246 | 0.2420 | 0.9524 | | 0.0006 | 42.0 | 252 | 0.2419 | 0.9524 | | 0.0006 | 43.0 | 258 | 0.2419 | 0.9524 | | 0.0007 | 44.0 | 264 | 0.2419 | 0.9524 | | 0.0041 | 45.0 | 270 | 0.2419 | 0.9524 | | 0.0041 | 46.0 | 276 | 0.2419 | 0.9524 | | 0.0014 | 47.0 | 282 | 0.2419 | 0.9524 | | 0.0014 | 48.0 | 288 | 0.2419 | 0.9524 | | 0.0023 | 49.0 | 294 | 0.2419 | 0.9524 | | 0.0004 | 50.0 | 300 | 0.2419 | 0.9524 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
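The auto-generated BEiT cards in this dump stop at training logs; a minimal inference sketch that applies to any of the `hkivancoral/hushem_1x_beit_*` checkpoints (the image path is a placeholder, not a file shipped with the repo):

```python
from transformers import pipeline

# Hedged sketch: classify one image with the fine-tuned BEiT checkpoint.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_beit_base_rms_00001_fold4",
)
print(classifier("example.jpg"))  # placeholder path to a test image
```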
hkivancoral/hushem_1x_beit_base_rms_00001_fold3
hkivancoral
2023-11-25T20:44:13Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T20:34:32Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_beit_base_rms_00001_fold3 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8372093023255814 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_beit_base_rms_00001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5862 - Accuracy: 0.8372 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3652 | 0.2558 | | 1.4655 | 2.0 | 12 | 0.9320 | 0.6512 | | 1.4655 | 3.0 | 18 | 0.5733 | 0.7907 | | 0.6613 | 4.0 | 24 | 0.3842 | 0.8605 | | 0.1719 | 5.0 | 30 | 0.4268 | 0.8605 | | 0.1719 | 6.0 | 36 | 0.3122 | 0.8837 | | 0.0362 | 7.0 | 42 | 0.5635 | 0.7907 | | 0.0362 | 8.0 | 48 | 0.2839 | 0.8837 | | 0.0103 | 9.0 | 54 | 0.3515 | 0.9070 | | 0.0048 | 10.0 | 60 | 0.4717 | 0.8837 | | 0.0048 | 11.0 | 66 | 0.4775 | 0.8372 | | 0.0038 | 12.0 | 72 | 0.5321 | 0.7907 | | 0.0038 | 13.0 | 78 | 0.4659 | 0.8372 | | 0.0022 | 14.0 | 84 | 0.5318 | 0.8140 | | 0.0017 | 15.0 | 90 | 0.5328 | 0.8605 | | 0.0017 | 16.0 | 96 | 0.4991 | 0.8372 | | 0.0025 | 17.0 | 102 | 0.5203 | 0.8372 | | 0.0025 | 18.0 | 108 | 0.5439 | 0.8372 | | 0.0011 | 19.0 | 114 | 0.5049 | 0.8372 | | 0.0014 | 20.0 | 120 | 0.5023 | 0.8372 | | 0.0014 | 21.0 | 126 | 0.5748 | 0.8372 | | 0.0013 | 22.0 | 132 | 0.5341 | 0.8372 | | 0.0013 | 23.0 | 138 | 0.4866 | 0.8372 | | 0.0011 | 24.0 | 144 | 0.5270 | 0.8372 | | 0.0012 | 25.0 | 150 | 0.5889 | 0.8372 | | 0.0012 | 26.0 | 156 | 0.6180 | 0.8372 | | 0.0013 | 27.0 | 162 | 0.6227 | 0.8372 | | 0.0013 | 28.0 | 168 | 0.6125 | 0.8372 | | 0.0007 | 29.0 | 174 | 0.5708 | 0.8605 | | 0.0004 | 30.0 | 180 | 0.5729 | 0.8372 | | 0.0004 | 31.0 | 186 | 0.5789 | 0.8372 | | 0.001 | 32.0 | 192 | 0.5842 | 0.8140 | | 0.001 | 33.0 | 198 | 0.5989 | 0.8372 | | 0.0008 | 34.0 | 204 | 0.5775 | 0.8140 | | 0.0013 | 35.0 | 210 | 0.5738 | 0.8372 | | 0.0013 | 36.0 | 216 | 0.5742 | 0.8140 | | 0.0006 | 37.0 | 222 | 0.6172 | 0.8140 | | 0.0006 | 38.0 | 228 | 0.5958 | 0.8140 | | 0.0026 | 39.0 | 234 | 0.5884 | 0.8140 | | 0.0006 | 40.0 | 240 | 0.5885 | 0.8140 | | 0.0006 | 41.0 | 246 | 0.5863 | 0.8372 | | 0.0008 | 42.0 | 252 | 0.5862 | 0.8372 | | 0.0008 | 43.0 | 258 | 0.5862 | 0.8372 | | 0.0006 | 44.0 | 264 | 0.5862 | 0.8372 | | 0.0004 | 45.0 | 270 | 0.5862 | 0.8372 | | 0.0004 | 46.0 | 276 | 0.5862 | 0.8372 | | 0.0006 | 47.0 | 282 | 0.5862 | 0.8372 | | 0.0006 | 48.0 | 288 | 0.5862 | 0.8372 | | 0.0005 | 49.0 | 294 | 0.5862 | 0.8372 | | 0.0004 | 50.0 | 300 | 0.5862 | 0.8372 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
crowbarmassage/a2c-PandaReachDense-v3
crowbarmassage
2023-11-25T20:38:50Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-25T20:33:52Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.19 +/- 0.11 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
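The card's usage section above is a TODO stub; a hedged completion, assuming the usual sb3 checkpoint filename and that `panda_gym` provides the environment (neither is stated in the card):

```python
import gymnasium as gym
import panda_gym  # noqa: F401  # registers PandaReachDense-v3 (assumed dependency)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on common sb3 repo layout; check the repo files.
checkpoint = load_from_hub(
    repo_id="crowbarmassage/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _states = model.predict(obs, deterministic=True)
```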
Kakashi/rl_training
Kakashi
2023-11-25T20:37:34Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-25T20:35:29Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.19 +/- 21.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
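The same TODO stub appears in this card; a hedged completion along the same lines (the checkpoint filename is an assumption):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="Kakashi/rl_training",
    filename="ppo-LunarLander-v2.zip",  # assumed filename; check the repo files
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _states = model.predict(obs, deterministic=True)
```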
chienpham/haha
chienpham
2023-11-25T20:34:04Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-11-25T20:33:36Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # haha This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('chienpham/haha') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('chienpham/haha') model = AutoModel.from_pretrained('chienpham/haha') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=chienpham/haha) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 313 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
hkivancoral/hushem_1x_beit_base_rms_00001_fold2
hkivancoral
2023-11-25T20:33:53Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T20:23:53Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_beit_base_rms_00001_fold2 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8444444444444444 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_beit_base_rms_00001_fold2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9358 - Accuracy: 0.8444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.2856 | 0.4889 | | 1.4398 | 2.0 | 12 | 0.9696 | 0.6222 | | 1.4398 | 3.0 | 18 | 0.7405 | 0.7111 | | 0.463 | 4.0 | 24 | 0.8561 | 0.7333 | | 0.1243 | 5.0 | 30 | 0.6572 | 0.8222 | | 0.1243 | 6.0 | 36 | 0.6983 | 0.8444 | | 0.0205 | 7.0 | 42 | 0.7294 | 0.8222 | | 0.0205 | 8.0 | 48 | 0.6504 | 0.8 | | 0.0064 | 9.0 | 54 | 0.6828 | 0.8222 | | 0.0142 | 10.0 | 60 | 0.6539 | 0.8222 | | 0.0142 | 11.0 | 66 | 0.7615 | 0.8444 | | 0.0032 | 12.0 | 72 | 0.8146 | 0.8444 | | 0.0032 | 13.0 | 78 | 0.8154 | 0.8444 | | 0.0019 | 14.0 | 84 | 0.7947 | 0.8444 | | 0.0028 | 15.0 | 90 | 0.7939 | 0.8444 | | 0.0028 | 16.0 | 96 | 0.8240 | 0.8444 | | 0.0013 | 17.0 | 102 | 0.8242 | 0.8222 | | 0.0013 | 18.0 | 108 | 0.8443 | 0.8444 | | 0.0014 | 19.0 | 114 | 0.8393 | 0.8444 | | 0.0012 | 20.0 | 120 | 0.9165 | 0.8222 | | 0.0012 | 21.0 | 126 | 0.8985 | 0.8222 | | 0.0008 | 22.0 | 132 | 0.9053 | 0.8222 | | 0.0008 | 23.0 | 138 | 0.9182 | 0.8222 | | 0.0007 | 24.0 | 144 | 0.9131 | 0.8222 | | 0.0007 | 25.0 | 150 | 0.9205 | 0.8222 | | 0.0007 | 26.0 | 156 | 0.9165 | 0.8222 | | 0.0004 | 27.0 | 162 | 0.9119 | 0.8222 | | 0.0004 | 28.0 | 168 | 0.9185 | 0.8222 | | 0.0005 | 29.0 | 174 | 0.9203 | 0.8222 | | 0.0004 | 30.0 | 180 | 0.9232 | 0.8222 | | 0.0004 | 31.0 | 186 | 0.9207 | 0.8444 | | 0.0009 | 32.0 | 192 | 0.9256 | 0.8444 | | 0.0009 | 33.0 | 198 | 0.9230 | 0.8444 | | 0.0082 | 34.0 | 204 | 0.9200 | 0.8444 | | 0.0007 | 35.0 | 210 | 0.9385 | 0.8444 | | 0.0007 | 36.0 | 216 | 0.9350 | 0.8444 | | 0.0005 | 37.0 | 222 | 0.9367 | 0.8444 | | 0.0005 | 38.0 | 228 | 0.9290 | 0.8444 | | 0.0044 | 39.0 | 234 | 0.9294 | 0.8444 | | 0.0005 | 40.0 | 240 | 0.9330 | 0.8444 | | 0.0005 | 41.0 | 246 | 0.9359 | 0.8444 | | 0.0006 | 42.0 | 252 | 0.9358 | 0.8444 | | 0.0006 | 43.0 | 258 | 0.9358 | 0.8444 | | 0.0005 | 44.0 | 264 | 0.9358 | 0.8444 | | 0.0007 | 45.0 | 270 | 0.9358 | 0.8444 | | 0.0007 | 46.0 | 276 | 0.9358 | 0.8444 | | 0.0007 | 47.0 | 282 | 0.9358 | 0.8444 | | 0.0007 | 48.0 | 288 | 0.9358 | 0.8444 | | 0.0006 | 49.0 | 294 | 0.9358 | 0.8444 | | 0.0004 | 50.0 | 300 | 0.9358 | 0.8444 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
Benlitzen43/roberta-large-finetuned-cosmos
Benlitzen43
2023-11-25T20:26:15Z
11
0
transformers
[ "transformers", "pytorch", "roberta", "multiple-choice", "generated_from_trainer", "dataset:cosmos_qa", "base_model:FacebookAI/roberta-large", "base_model:finetune:FacebookAI/roberta-large", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
2023-11-22T20:52:22Z
--- license: mit base_model: roberta-large tags: - generated_from_trainer datasets: - cosmos_qa metrics: - accuracy model-index: - name: roberta-large-finetuned-cosmos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-finetuned-cosmos This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the cosmos_qa dataset. It achieves the following results on the evaluation set: - Loss: 1.3863 - Accuracy: 0.2533 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.39 | 1.0 | 12631 | 1.3863 | 0.2586 | | 1.3956 | 2.0 | 25262 | 1.3863 | 0.2563 | | 1.3902 | 3.0 | 37893 | 1.3863 | 0.2533 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
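The card above ships no inference snippet; a minimal sketch of scoring four answer candidates with a multiple-choice head (the context, question, and choices below are illustrative, not taken from cosmos_qa):

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

repo = "Benlitzen43/roberta-large-finetuned-cosmos"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

context = "I walked to the station but the last train had already left."
question = "Why was the narrator stuck?"
choices = ["They missed the train.", "They lost their ticket.",
           "The station was closed.", "They forgot their phone."]

# One (context+question, choice) pair per candidate, batched as one example
enc = tokenizer([f"{context} {question}"] * len(choices), choices,
                return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # -> (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**enc).logits
print(choices[logits.argmax(-1).item()])
```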
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t300_e5
FounderOfHuggingface
2023-11-25T20:14:14Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2023-11-25T20:14:12Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
Looyyd/q-FrozenLake-v1-4x4-noSlippery
Looyyd
2023-11-25T20:03:19Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-25T20:03:16Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Looyyd/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
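The card's snippet calls a `load_from_hub` helper without defining it; a minimal sketch of such a helper, assuming the pickle stores a dict with an `env_id` key as in the Deep RL course notebooks:

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table bundle from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Looyyd/q-FrozenLake-v1-4x4-noSlippery",
                      filename="q-learning.pkl")
# Per the card: this repo was trained with the non-slippery variant.
env = gym.make(model["env_id"], is_slippery=False)
```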
hkivancoral/hushem_1x_beit_base_sgd_00001_fold4
hkivancoral
2023-11-25T20:00:53Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T19:52:29Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_beit_base_sgd_00001_fold4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.2857142857142857 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_beit_base_sgd_00001_fold4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4953 - Accuracy: 0.2857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5027 | 0.3095 | | 1.6067 | 2.0 | 12 | 1.5024 | 0.2857 | | 1.6067 | 3.0 | 18 | 1.5020 | 0.2857 | | 1.5707 | 4.0 | 24 | 1.5016 | 0.2857 | | 1.5808 | 5.0 | 30 | 1.5013 | 0.2857 | | 1.5808 | 6.0 | 36 | 1.5009 | 0.2857 | | 1.5893 | 7.0 | 42 | 1.5006 | 0.2857 | | 1.5893 | 8.0 | 48 | 1.5003 | 0.2857 | | 1.5944 | 9.0 | 54 | 1.5000 | 0.2857 | | 1.5821 | 10.0 | 60 | 1.4997 | 0.2857 | | 1.5821 | 11.0 | 66 | 1.4994 | 0.2857 | | 1.5703 | 12.0 | 72 | 1.4991 | 0.2857 | | 1.5703 | 13.0 | 78 | 1.4988 | 0.2857 | | 1.5654 | 14.0 | 84 | 1.4986 | 0.2857 | | 1.5848 | 15.0 | 90 | 1.4983 | 0.2857 | | 1.5848 | 16.0 | 96 | 1.4981 | 0.2857 | | 1.606 | 17.0 | 102 | 1.4978 | 0.2857 | | 1.606 | 18.0 | 108 | 1.4976 | 0.2857 | | 1.6306 | 19.0 | 114 | 1.4974 | 0.2857 | | 1.5966 | 20.0 | 120 | 1.4972 | 0.2857 | | 1.5966 | 21.0 | 126 | 1.4970 | 0.2857 | | 1.5946 | 22.0 | 132 | 1.4969 | 0.2857 | | 1.5946 | 23.0 | 138 | 1.4967 | 0.2857 | | 1.5656 | 24.0 | 144 | 1.4966 | 0.2857 | | 1.5572 | 25.0 | 150 | 1.4964 | 0.2857 | | 1.5572 | 26.0 | 156 | 1.4963 | 0.2857 | | 1.5856 | 27.0 | 162 | 1.4961 | 0.2857 | | 1.5856 | 28.0 | 168 | 1.4960 | 0.2857 | | 1.612 | 29.0 | 174 | 1.4959 | 0.2857 | | 1.581 | 30.0 | 180 | 1.4958 | 0.2857 | | 1.581 | 31.0 | 186 | 1.4957 | 0.2857 | | 1.566 | 32.0 | 192 | 1.4956 | 0.2857 | | 1.566 | 33.0 | 198 | 1.4956 | 0.2857 | | 1.5925 | 34.0 | 204 | 1.4955 | 0.2857 | | 1.5991 | 35.0 | 210 | 1.4954 | 0.2857 | | 1.5991 | 36.0 | 216 | 1.4954 | 0.2857 | | 1.5811 | 37.0 | 222 | 1.4954 | 0.2857 | | 1.5811 | 38.0 | 228 | 1.4953 | 0.2857 | | 1.5945 | 39.0 | 234 | 1.4953 | 0.2857 | | 1.5831 | 40.0 | 240 | 1.4953 | 0.2857 | | 1.5831 | 41.0 | 246 | 1.4953 | 0.2857 | | 1.5802 | 42.0 | 252 | 1.4953 | 0.2857 | | 1.5802 | 43.0 | 258 | 1.4953 | 0.2857 | | 1.6388 | 44.0 | 264 | 1.4953 | 0.2857 | | 1.5513 | 45.0 | 270 | 1.4953 | 0.2857 | | 1.5513 | 46.0 | 276 | 1.4953 | 0.2857 | | 1.5675 | 47.0 | 282 | 1.4953 | 0.2857 | | 1.5675 | 48.0 | 288 | 1.4953 | 0.2857 | | 1.6043 | 49.0 | 294 | 1.4953 | 0.2857 | | 1.6042 | 50.0 | 300 | 1.4953 | 0.2857 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
Parleatacoeur/Llama_2_7b_chat_hf_leyes_peruanas_civiles
Parleatacoeur
2023-11-25T19:57:34Z
1
0
peft
[ "peft", "safetensors", "llama", "region:us" ]
null
2023-11-25T02:48:49Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
kejolong/gamechara
kejolong
2023-11-25T19:57:24Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-09T11:24:32Z
--- license: creativeml-openrail-m ---
boda/mistral-7b-story-generation-24k
boda
2023-11-25T19:52:53Z
4
1
peft
[ "peft", "safetensors", "en", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:mit", "region:us" ]
null
2023-11-24T09:11:38Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 license: mit language: - en metrics: - perplexity - bertscore --- # Model Card for Model ID Fine-tuned using QLoRA for a story generation task. ### Model Description We utilize the "Hierarchical Neural Story Generation" dataset and fine-tune the model to generate stories. The input to the model is structured as follows: ''' \#\#\# Instruction: Below is a story idea. Write a short story based on this context. \#\#\# Input: [story idea here] \#\#\# Response: ''' - **Developed by:** Abdelrahman ’Boda’ Sadallah, Anastasiia Demidova, Daria Kotova - **Model type:** Causal LM - **Language(s) (NLP):** English - **Finetuned from model [optional]:** mistralai/Mistral-7B-v0.1 ### Model Sources - **Repository:** https://github.com/BodaSadalla98/llm-optimized-fintuning ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> The model is the result of our AI project. If you intend to use it, please refer to the repo. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> To improve story generation, you can play with parameters: temperature, top_p/top_k, repetition_penalty, etc. ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Github for the dataset: https://github.com/kevalnagda/StoryGeneration ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics Test split of the same dataset. #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> We are using perplexity and BERTScore. ### Results Perplexity: 8.8647 BERTScore: 80.76 ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
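The card above gives the prompt format but no loading code; a minimal sketch, assuming the stated base model and the card's instruction template (the story idea is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the stated base model and attach the QLoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "boda/mistral-7b-story-generation-24k")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = ("### Instruction: Below is a story idea. Write a short story based on "
          "this context.\n### Input: a lighthouse keeper finds a message in a bottle\n"
          "### Response:")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```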
blanchon/ppo-LunarLander-v2
blanchon
2023-11-25T19:52:39Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-25T19:45:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: MlpPolicy results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.72 +/- 21.31 name: mean_reward verified: false --- # **MlpPolicy** Agent playing **LunarLander-v2** This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
hkivancoral/hushem_1x_beit_base_sgd_00001_fold3
hkivancoral
2023-11-25T19:51:52Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T19:43:37Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_beit_base_sgd_00001_fold3 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.2558139534883721 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_beit_base_sgd_00001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5773 - Accuracy: 0.2558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5860 | 0.2558 | | 1.5832 | 2.0 | 12 | 1.5856 | 0.2558 | | 1.5832 | 3.0 | 18 | 1.5851 | 0.2558 | | 1.5961 | 4.0 | 24 | 1.5847 | 0.2558 | | 1.5221 | 5.0 | 30 | 1.5843 | 0.2558 | | 1.5221 | 6.0 | 36 | 1.5839 | 0.2558 | | 1.5495 | 7.0 | 42 | 1.5835 | 0.2558 | | 1.5495 | 8.0 | 48 | 1.5831 | 0.2558 | | 1.5657 | 9.0 | 54 | 1.5828 | 0.2558 | | 1.5842 | 10.0 | 60 | 1.5824 | 0.2558 | | 1.5842 | 11.0 | 66 | 1.5821 | 0.2558 | | 1.5665 | 12.0 | 72 | 1.5818 | 0.2558 | | 1.5665 | 13.0 | 78 | 1.5815 | 0.2558 | | 1.536 | 14.0 | 84 | 1.5812 | 0.2558 | | 1.572 | 15.0 | 90 | 1.5809 | 0.2558 | | 1.572 | 16.0 | 96 | 1.5807 | 0.2558 | | 1.5843 | 17.0 | 102 | 1.5804 | 0.2558 | | 1.5843 | 18.0 | 108 | 1.5802 | 0.2558 | | 1.5423 | 19.0 | 114 | 1.5799 | 0.2558 | | 1.5549 | 20.0 | 120 | 1.5797 | 0.2558 | | 1.5549 | 21.0 | 126 | 1.5794 | 0.2558 | | 1.5883 | 22.0 | 132 | 1.5792 | 0.2558 | | 1.5883 | 23.0 | 138 | 1.5791 | 0.2558 | | 1.5691 | 24.0 | 144 | 1.5789 | 0.2558 | | 1.5489 | 25.0 | 150 | 1.5787 | 0.2558 | | 1.5489 | 26.0 | 156 | 1.5785 | 0.2558 | | 1.5874 | 27.0 | 162 | 1.5784 | 0.2558 | | 1.5874 | 28.0 | 168 | 1.5782 | 0.2558 | | 1.6141 | 29.0 | 174 | 1.5781 | 0.2558 | | 1.5647 | 30.0 | 180 | 1.5780 | 0.2558 | | 1.5647 | 31.0 | 186 | 1.5779 | 0.2558 | | 1.5987 | 32.0 | 192 | 1.5778 | 0.2558 | | 1.5987 | 33.0 | 198 | 1.5777 | 0.2558 | | 1.504 | 34.0 | 204 | 1.5776 | 0.2558 | | 1.5743 | 35.0 | 210 | 1.5775 | 0.2558 | | 1.5743 | 36.0 | 216 | 1.5775 | 0.2558 | | 1.5471 | 37.0 | 222 | 1.5774 | 0.2558 | | 1.5471 | 38.0 | 228 | 1.5774 | 0.2558 | | 1.5808 | 39.0 | 234 | 1.5774 | 0.2558 | | 1.5531 | 40.0 | 240 | 1.5774 | 0.2558 | | 1.5531 | 41.0 | 246 | 1.5773 | 0.2558 | | 1.5447 | 42.0 | 252 | 1.5773 | 0.2558 | | 1.5447 | 43.0 | 258 | 1.5773 | 0.2558 | | 1.5547 | 44.0 | 264 | 1.5773 | 0.2558 | | 1.5706 | 45.0 | 270 | 1.5773 | 0.2558 | | 1.5706 | 46.0 | 276 | 1.5773 | 0.2558 | | 1.569 | 47.0 | 282 | 1.5773 | 0.2558 | | 1.569 | 48.0 | 288 | 1.5773 | 0.2558 | | 1.5551 | 49.0 | 294 | 1.5773 | 0.2558 | | 1.5471 | 50.0 | 300 | 1.5773 | 0.2558 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
nrshoudi/Whisper-base-Arabic-phoneme
nrshoudi
2023-11-25T19:50:16Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "region:us" ]
null
2023-11-25T19:50:14Z
--- license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer model-index: - name: Whisper-base-Arabic-phoneme results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper-base-Arabic-phoneme This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2151 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0622 | 1.0 | 546 | 0.2189 | | 0.032 | 2.0 | 1092 | 0.1893 | | 0.0446 | 3.0 | 1638 | 0.1964 | | 0.0211 | 4.0 | 2184 | 0.1998 | | 0.0276 | 5.0 | 2730 | 0.1909 | | 0.0104 | 6.0 | 3276 | 0.2183 | | 0.0114 | 7.0 | 3822 | 0.2049 | | 0.0046 | 8.0 | 4368 | 0.2075 | | 0.0029 | 9.0 | 4914 | 0.2086 | | 0.0018 | 10.0 | 5460 | 0.2151 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
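The card above has no usage section; a hedged sketch via the ASR pipeline, assuming the repo ships the processor files the pipeline needs (the audio path is a placeholder). The output should be the Arabic phoneme sequence this model was fine-tuned to produce:

```python
from transformers import pipeline

# Hedged sketch: transcribe one audio clip with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition",
               model="nrshoudi/Whisper-base-Arabic-phoneme")
print(asr("sample.wav"))  # placeholder path to a 16 kHz audio file
```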
hkivancoral/hushem_1x_beit_base_sgd_00001_fold2
hkivancoral
2023-11-25T19:42:58Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T19:34:57Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_beit_base_sgd_00001_fold2 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.26666666666666666 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_beit_base_sgd_00001_fold2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5467 - Accuracy: 0.2667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5555 | 0.2667 | | 1.6026 | 2.0 | 12 | 1.5551 | 0.2667 | | 1.6026 | 3.0 | 18 | 1.5546 | 0.2667 | | 1.5488 | 4.0 | 24 | 1.5542 | 0.2667 | | 1.6016 | 5.0 | 30 | 1.5538 | 0.2667 | | 1.6016 | 6.0 | 36 | 1.5534 | 0.2667 | | 1.5779 | 7.0 | 42 | 1.5530 | 0.2667 | | 1.5779 | 8.0 | 48 | 1.5527 | 0.2667 | | 1.588 | 9.0 | 54 | 1.5523 | 0.2667 | | 1.5533 | 10.0 | 60 | 1.5519 | 0.2667 | | 1.5533 | 11.0 | 66 | 1.5516 | 0.2667 | | 1.5856 | 12.0 | 72 | 1.5513 | 0.2667 | | 1.5856 | 13.0 | 78 | 1.5510 | 0.2667 | | 1.5657 | 14.0 | 84 | 1.5507 | 0.2667 | | 1.5825 | 15.0 | 90 | 1.5503 | 0.2667 | | 1.5825 | 16.0 | 96 | 1.5501 | 0.2667 | | 1.5958 | 17.0 | 102 | 1.5498 | 0.2667 | | 1.5958 | 18.0 | 108 | 1.5495 | 0.2667 | | 1.578 | 19.0 | 114 | 1.5493 | 0.2667 | | 1.5925 | 20.0 | 120 | 1.5491 | 0.2667 | | 1.5925 | 21.0 | 126 | 1.5489 | 0.2667 | | 1.5804 | 22.0 | 132 | 1.5486 | 0.2667 | | 1.5804 | 23.0 | 138 | 1.5484 | 0.2667 | | 1.5969 | 24.0 | 144 | 1.5482 | 0.2667 | | 1.5643 | 25.0 | 150 | 1.5481 | 0.2667 | | 1.5643 | 26.0 | 156 | 1.5479 | 0.2667 | | 1.5656 | 27.0 | 162 | 1.5478 | 0.2667 | | 1.5656 | 28.0 | 168 | 1.5476 | 0.2667 | | 1.5441 | 29.0 | 174 | 1.5475 | 0.2667 | | 1.587 | 30.0 | 180 | 1.5474 | 0.2667 | | 1.587 | 31.0 | 186 | 1.5473 | 0.2667 | | 1.5666 | 32.0 | 192 | 1.5472 | 0.2667 | | 1.5666 | 33.0 | 198 | 1.5471 | 0.2667 | | 1.5492 | 34.0 | 204 | 1.5470 | 0.2667 | | 1.5567 | 35.0 | 210 | 1.5469 | 0.2667 | | 1.5567 | 36.0 | 216 | 1.5469 | 0.2667 | | 1.5593 | 37.0 | 222 | 1.5468 | 0.2667 | | 1.5593 | 38.0 | 228 | 1.5468 | 0.2667 | | 1.5776 | 39.0 | 234 | 1.5468 | 0.2667 | | 1.5552 | 40.0 | 240 | 1.5467 | 0.2667 | | 1.5552 | 41.0 | 246 | 1.5467 | 0.2667 | | 1.5605 | 42.0 | 252 | 1.5467 | 0.2667 | | 1.5605 | 43.0 | 258 | 1.5467 | 0.2667 | | 1.6075 | 44.0 | 264 | 1.5467 | 0.2667 | | 1.5667 | 45.0 | 270 | 1.5467 | 0.2667 | | 1.5667 | 46.0 | 276 | 1.5467 | 0.2667 | | 1.5665 | 47.0 | 282 | 1.5467 | 0.2667 | | 1.5665 | 48.0 | 288 | 1.5467 | 0.2667 | | 1.5544 | 49.0 | 294 | 1.5467 | 0.2667 | | 1.5829 | 50.0 | 300 | 1.5467 | 0.2667 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
jadasdn/asr_model
jadasdn
2023-11-25T19:39:12Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-11-25T19:38:58Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - wer model-index: - name: asr_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # asr_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2363 - Wer: 0.5153 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.4621 | 2.0 | 1000 | 0.4702 | 0.9741 | | 0.4612 | 4.0 | 2000 | 0.4621 | 0.9741 | | 0.4458 | 6.0 | 3000 | 0.4464 | 0.9714 | | 0.384 | 8.0 | 4000 | 0.3853 | 0.8235 | | 0.3065 | 10.0 | 5000 | 0.3166 | 0.7829 | | 0.2861 | 12.0 | 6000 | 0.2809 | 0.6802 | | 0.248 | 14.0 | 7000 | 0.2677 | 0.6051 | | 0.2449 | 16.0 | 8000 | 0.2541 | 0.5778 | | 0.2298 | 18.0 | 9000 | 0.2480 | 0.5710 | | 0.2281 | 20.0 | 10000 | 0.2418 | 0.5505 | | 0.216 | 22.0 | 11000 | 0.2420 | 0.5340 | | 0.2083 | 24.0 | 12000 | 0.2380 | 0.5253 | | 0.1957 | 26.0 | 13000 | 0.2380 | 0.5209 | | 0.1985 | 28.0 | 14000 | 0.2360 | 0.5181 | | 0.2078 | 30.0 | 15000 | 0.2363 | 0.5153 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
LoneStriker/Capybara-Tess-Yi-34B-200K-DARE-Ties-3.0bpw-h6-exl2
LoneStriker
2023-11-25T19:22:11Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T19:13:53Z
--- license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE language: - en library_name: transformers pipeline_tag: text-generation --- **NousResearch/Nous-Capybara-34B**, **migtissera/Tess-M-v1.2** and **migtissera/Tess-M-v1.3** merged with a new, experimental implementation of "dare ties" via mergekit. See: > Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch https://github.com/yule-BUAA/MergeLM https://github.com/cg123/mergekit/tree/dare-tokenizer Highly experimental and still being tested! But this should yield a better merge than a typical linear/slerp merge or even a ties merge. *** Merged with the following config, and the tokenizer from Yi Llamafied: ``` models: - model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied # no parameters necessary for base model - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.3 parameters: weight: 0.50 density: 0.56 - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.2 parameters: weight: 0.20 density: 0.50 - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B parameters: weight: 0.50 density: 0.56 merge_method: dare_ties base_model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied parameters: int8_mask: true dtype: bfloat16 ``` Tess 1.2 (at a low weight) and 1.3 were used because, according to the trainer, they were trained on different datasets: https://migel.substack.com/p/learnings-from-training-tess I chose not to include other finetunes, such as Dolphin, because they aren't trained on the 200K base. *** ## Prompt template: Orca-Vicuna ``` SYSTEM: {system_message} USER: {prompt} ASSISTANT: ``` Being a Yi model, try disabling the BOS token and/or running a lower temperature with MinP if output doesn't seem right. Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition. *** Credits: https://github.com/cg123/mergekit/tree/dare-tokenizer https://huggingface.co/NousResearch/Nous-Capybara-34B/ https://huggingface.co/migtissera/Tess-M-v1.2 https://huggingface.co/migtissera/Tess-M-v1.3 https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied https://huggingface.co/01-ai/Yi-34B-200K
hkivancoral/hushem_1x_beit_base_sgd_0001_fold5
hkivancoral
2023-11-25T19:21:39Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T19:12:54Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_beit_base_sgd_0001_fold5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.2682926829268293 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_beit_base_sgd_0001_fold5 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5504 - Accuracy: 0.2683 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6302 | 0.2439 | | 1.5748 | 2.0 | 12 | 1.6252 | 0.2439 | | 1.5748 | 3.0 | 18 | 1.6204 | 0.2439 | | 1.5763 | 4.0 | 24 | 1.6160 | 0.2439 | | 1.56 | 5.0 | 30 | 1.6118 | 0.2439 | | 1.56 | 6.0 | 36 | 1.6079 | 0.2439 | | 1.5722 | 7.0 | 42 | 1.6043 | 0.2439 | | 1.5722 | 8.0 | 48 | 1.6006 | 0.2439 | | 1.5053 | 9.0 | 54 | 1.5970 | 0.2439 | | 1.5617 | 10.0 | 60 | 1.5937 | 0.2439 | | 1.5617 | 11.0 | 66 | 1.5908 | 0.2439 | | 1.5101 | 12.0 | 72 | 1.5876 | 0.2439 | | 1.5101 | 13.0 | 78 | 1.5848 | 0.2683 | | 1.5266 | 14.0 | 84 | 1.5820 | 0.2683 | | 1.4925 | 15.0 | 90 | 1.5796 | 0.2683 | | 1.4925 | 16.0 | 96 | 1.5771 | 0.2683 | | 1.5202 | 17.0 | 102 | 1.5750 | 0.2683 | | 1.5202 | 18.0 | 108 | 1.5729 | 0.2683 | | 1.5168 | 19.0 | 114 | 1.5711 | 0.2683 | | 1.5066 | 20.0 | 120 | 1.5691 | 0.2683 | | 1.5066 | 21.0 | 126 | 1.5674 | 0.2683 | | 1.508 | 22.0 | 132 | 1.5658 | 0.2683 | | 1.508 | 23.0 | 138 | 1.5642 | 0.2683 | | 1.4868 | 24.0 | 144 | 1.5626 | 0.2683 | | 1.5018 | 25.0 | 150 | 1.5612 | 0.2683 | | 1.5018 | 26.0 | 156 | 1.5598 | 0.2683 | | 1.5061 | 27.0 | 162 | 1.5585 | 0.2683 | | 1.5061 | 28.0 | 168 | 1.5574 | 0.2683 | | 1.4922 | 29.0 | 174 | 1.5565 | 0.2683 | | 1.5131 | 30.0 | 180 | 1.5557 | 0.2683 | | 1.5131 | 31.0 | 186 | 1.5547 | 0.2683 | | 1.5054 | 32.0 | 192 | 1.5540 | 0.2683 | | 1.5054 | 33.0 | 198 | 1.5533 | 0.2683 | | 1.4665 | 34.0 | 204 | 1.5527 | 0.2683 | | 1.5093 | 35.0 | 210 | 1.5521 | 0.2683 | | 1.5093 | 36.0 | 216 | 1.5516 | 0.2683 | | 1.5042 | 37.0 | 222 | 1.5513 | 0.2683 | | 1.5042 | 38.0 | 228 | 1.5509 | 0.2683 | | 1.4952 | 39.0 | 234 | 1.5507 | 0.2683 | | 1.4728 | 40.0 | 240 | 1.5505 | 0.2683 | | 1.4728 | 41.0 | 246 | 1.5504 | 0.2683 | | 1.4831 | 42.0 | 252 | 1.5504 | 0.2683 | | 1.4831 | 43.0 | 258 | 1.5504 | 0.2683 | | 1.4991 | 44.0 | 264 | 1.5504 | 0.2683 | | 1.4929 | 45.0 | 270 | 1.5504 | 0.2683 | | 1.4929 | 46.0 | 276 | 1.5504 | 0.2683 | | 1.5005 | 47.0 | 282 | 1.5504 | 0.2683 | | 1.5005 | 48.0 | 288 | 1.5504 | 0.2683 | | 1.4392 | 49.0 | 294 | 1.5504 | 0.2683 | | 
1.4753 | 50.0 | 300 | 1.5504 | 0.2683 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
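A minimal inference sketch for this checkpoint, assuming only that the repo id from the header above is correct; the image path is illustrative.

```python
# Minimal image-classification sketch for this BEiT fine-tune.
from transformers import pipeline

clf = pipeline("image-classification", model="hkivancoral/hushem_1x_beit_base_sgd_0001_fold5")
print(clf("example.jpg"))  # list of {"label": ..., "score": ...} dicts
```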
bartowski/Tess-XS-v1-3-yarn-128K-exl2
bartowski
2023-11-25T19:21:11Z
1
1
null
[ "text-generation", "license:apache-2.0", "region:us" ]
text-generation
2023-11-25T16:18:43Z
--- license: apache-2.0 quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of Tess-XS-v1-3-yarn-128K Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.9">turboderp's ExLlamaV2 v0.0.9</a> for quantization. Each branch contains a quantization at a different bits per weight, with the main branch containing only the measurement.json needed for further conversions. Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset. Default arguments were used, except that when the bits per weight is above 6.0, the lm_head layer is quantized at 8 bits per weight instead of the default 6. Original model: https://huggingface.co/migtissera/Tess-XS-v1-3-yarn-128K <a href="https://huggingface.co/bartowski/Tess-XS-v1-3-yarn-128K-exl2/tree/4_0">4.0 bits per weight</a> <a href="https://huggingface.co/bartowski/Tess-XS-v1-3-yarn-128K-exl2/tree/5_0">5.0 bits per weight</a> <a href="https://huggingface.co/bartowski/Tess-XS-v1-3-yarn-128K-exl2/tree/6_0">6.0 bits per weight</a> <a href="https://huggingface.co/bartowski/Tess-XS-v1-3-yarn-128K-exl2/tree/8_0">8.0 bits per weight</a> ## Download instructions With git: ```shell git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Tess-XS-v1-3-yarn-128K-exl2 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Tess-XS-v1-3-yarn-128K-exl2`: ```shell mkdir Tess-XS-v1-3-yarn-128K-exl2 huggingface-cli download bartowski/Tess-XS-v1-3-yarn-128K-exl2 --local-dir Tess-XS-v1-3-yarn-128K-exl2 --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Tess-XS-v1-3-yarn-128K-exl2 huggingface-cli download bartowski/Tess-XS-v1-3-yarn-128K-exl2 --revision 4_0 --local-dir Tess-XS-v1-3-yarn-128K-exl2 --local-dir-use-symlinks False ```
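The same branch-aware download can be done from Python with `huggingface_hub.snapshot_download`; this is a sketch equivalent to the CLI commands above, not part of the original card.

```python
# Python equivalent of the CLI download above; `revision` selects a quant branch.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Tess-XS-v1-3-yarn-128K-exl2",
    revision="4_0",  # pick the bits-per-weight branch you want
    local_dir="Tess-XS-v1-3-yarn-128K-exl2",
    local_dir_use_symlinks=False,
)
```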
hkivancoral/hushem_1x_beit_base_sgd_0001_fold3
hkivancoral
2023-11-25T19:03:03Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T18:54:17Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_beit_base_sgd_0001_fold3 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.23255813953488372 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_beit_base_sgd_0001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5102 - Accuracy: 0.2326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5815 | 0.2558 | | 1.5795 | 2.0 | 12 | 1.5768 | 0.2558 | | 1.5795 | 3.0 | 18 | 1.5725 | 0.2558 | | 1.5843 | 4.0 | 24 | 1.5685 | 0.2558 | | 1.5064 | 5.0 | 30 | 1.5650 | 0.2558 | | 1.5064 | 6.0 | 36 | 1.5613 | 0.2558 | | 1.5264 | 7.0 | 42 | 1.5577 | 0.2558 | | 1.5264 | 8.0 | 48 | 1.5544 | 0.2558 | | 1.5341 | 9.0 | 54 | 1.5511 | 0.2558 | | 1.5468 | 10.0 | 60 | 1.5482 | 0.2558 | | 1.5468 | 11.0 | 66 | 1.5458 | 0.2558 | | 1.5265 | 12.0 | 72 | 1.5432 | 0.2558 | | 1.5265 | 13.0 | 78 | 1.5409 | 0.2558 | | 1.4949 | 14.0 | 84 | 1.5386 | 0.2558 | | 1.5252 | 15.0 | 90 | 1.5362 | 0.2558 | | 1.5252 | 16.0 | 96 | 1.5341 | 0.2558 | | 1.5295 | 17.0 | 102 | 1.5321 | 0.2558 | | 1.5295 | 18.0 | 108 | 1.5302 | 0.2558 | | 1.4916 | 19.0 | 114 | 1.5284 | 0.2558 | | 1.4984 | 20.0 | 120 | 1.5267 | 0.2326 | | 1.4984 | 21.0 | 126 | 1.5250 | 0.2326 | | 1.5211 | 22.0 | 132 | 1.5235 | 0.2326 | | 1.5211 | 23.0 | 138 | 1.5222 | 0.2326 | | 1.506 | 24.0 | 144 | 1.5209 | 0.2326 | | 1.483 | 25.0 | 150 | 1.5197 | 0.2326 | | 1.483 | 26.0 | 156 | 1.5185 | 0.2326 | | 1.5184 | 27.0 | 162 | 1.5173 | 0.2326 | | 1.5184 | 28.0 | 168 | 1.5163 | 0.2326 | | 1.536 | 29.0 | 174 | 1.5154 | 0.2326 | | 1.4949 | 30.0 | 180 | 1.5146 | 0.2326 | | 1.4949 | 31.0 | 186 | 1.5138 | 0.2326 | | 1.5188 | 32.0 | 192 | 1.5132 | 0.2326 | | 1.5188 | 33.0 | 198 | 1.5126 | 0.2326 | | 1.4387 | 34.0 | 204 | 1.5120 | 0.2326 | | 1.4953 | 35.0 | 210 | 1.5116 | 0.2326 | | 1.4953 | 36.0 | 216 | 1.5112 | 0.2326 | | 1.4703 | 37.0 | 222 | 1.5108 | 0.2326 | | 1.4703 | 38.0 | 228 | 1.5106 | 0.2326 | | 1.5017 | 39.0 | 234 | 1.5104 | 0.2326 | | 1.4757 | 40.0 | 240 | 1.5103 | 0.2326 | | 1.4757 | 41.0 | 246 | 1.5102 | 0.2326 | | 1.4714 | 42.0 | 252 | 1.5102 | 0.2326 | | 1.4714 | 43.0 | 258 | 1.5102 | 0.2326 | | 1.4776 | 44.0 | 264 | 1.5102 | 0.2326 | | 1.4921 | 45.0 | 270 | 1.5102 | 0.2326 | | 1.4921 | 46.0 | 276 | 1.5102 | 0.2326 | | 1.4896 | 47.0 | 282 | 1.5102 | 0.2326 | | 1.4896 | 48.0 | 288 | 1.5102 | 0.2326 | | 1.4789 | 49.0 | 294 | 1.5102 | 0.2326 | 
| 1.4671 | 50.0 | 300 | 1.5102 | 0.2326 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
emptyngton/mistral_instruct_generation
emptyngton
2023-11-25T19:01:03Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2023-11-25T19:00:42Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.1 tags: - generated_from_trainer model-index: - name: mistral_instruct_generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_instruct_generation This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3094 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5255 | 0.02 | 20 | 1.3722 | | 1.4519 | 0.03 | 40 | 1.3348 | | 1.4296 | 0.05 | 60 | 1.3226 | | 1.4289 | 0.06 | 80 | 1.3142 | | 1.4278 | 0.08 | 100 | 1.3094 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
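The card does not say whether this fine-tune is a full model or an adapter, so the sketch below prompts the stated base model with its chat template; swap in the fine-tuned weights as appropriate.

```python
# Sketch: chat-template prompting for the Mistral-7B-Instruct base named in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-Instruct-v0.1"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

messages = [{"role": "user", "content": "Write one sentence about instruction tuning."}]
ids = tok.apply_chat_template(messages, return_tensors="pt").to(model.device)
out = model.generate(ids, max_new_tokens=64)
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```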
janakipanneerselvam/segformer-finetuned-TMSL21_Sunlit_Tomatoes
janakipanneerselvam
2023-11-25T19:00:37Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "image-segmentation", "vision", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2023-11-25T17:07:03Z
--- license: other base_model: nvidia/mit-b0 tags: - image-segmentation - vision - generated_from_trainer model-index: - name: segformer-finetuned-TMSL21_Sunlit_Tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-finetuned-TMSL21_Sunlit_Tomatoes This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the janakipanneerselvam/TMSL21_Sunlit_Tomatoes dataset. It achieves the following results on the evaluation set: - Loss: 0.0197 - Mean Iou: 0.0 - Mean Accuracy: nan - Overall Accuracy: nan - Accuracy Sunlit: nan - Iou Sunlit: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: polynomial - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Sunlit | Iou Sunlit | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:---------------:|:----------:| | 0.0164 | 0.24 | 100 | 0.0197 | 0.0 | nan | nan | nan | 0.0 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
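A minimal segmentation sketch for this checkpoint follows; the image filename is illustrative, and since the card reports only a single `Sunlit` label, the argmax mask is trivial unless more labels are configured.

```python
# Minimal semantic-segmentation sketch for this SegFormer fine-tune.
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

repo = "janakipanneerselvam/segformer-finetuned-TMSL21_Sunlit_Tomatoes"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("tomato.jpg")  # illustrative path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)
mask = logits.argmax(dim=1)[0]  # per-pixel class ids; trivial if only one label exists
```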
PracticeLLM/Custom-KoLLM-13B-v1
PracticeLLM
2023-11-25T18:49:48Z
2,249
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-21T17:17:29Z
--- language: - ko datasets: - kyujinpy/KOR-OpenOrca-Platypus-v3 library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- # **⭐My custom LLM 13B⭐** ## Model Details **Model Developers** - Kyujin Han (kyujinpy) **Model Architecture** - My custom LLM 13B is an auto-regressive language model based on the LLaMA2 transformer architecture. **Base Model** - [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b) **Training Dataset** - [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3). --- # Model comparisons1 > Ko-LLM leaderboard(11/23; [link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)) | Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | --- | --- | --- | --- | --- | --- | --- | | **⭐My custom LLM 13B⭐** | 50.19 | 45.99 | 56.93 | 41.78 | 41.66 | **64.58** | --- # Model comparisons2 > AI-Harness evaluation; [link](https://github.com/Beomi/ko-lm-evaluation-harness) | Model | Copa | Copa | HellaSwag | HellaSwag | BoolQ | BoolQ | Sentineg | Sentineg | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot | | **⭐My custom LLM 13B⭐** | 0.7987 | 0.8269 | 0.4994 | 0.5660 | 0.3343 | 0.5060 | 0.6984 | 0.9723 | | [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b) | 0.7768 | 0.8128 | 0.4999 | 0.5127 | 0.3988 | 0.7038 | 0.5870 | 0.9748 | --- # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "PracticeLLM/Custom-KoLLM-13B-v1" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` ---
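The implementation code above stops after loading; a hedged continuation for generation might look like this (the prompt and decoding settings are illustrative, not from the card):

```python
# Continuation of the loading snippet above: a simple generation call.
prompt = "한국의 수도는 어디인가요?"  # illustrative Korean prompt: "What is the capital of Korea?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
outputs = OpenOrca.generate(**inputs, max_new_tokens=128)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))
```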
djisa/reel
djisa
2023-11-25T18:40:27Z
0
0
null
[ "region:us" ]
null
2023-11-25T18:35:51Z
"Always stay in control of your life. Consult with financial psychologist Dr. Money for guidance." Сцена 1: We see our financial psychologist standing behind the coffee shop counter, sipping on his favorite latte. He says, "I love coffee! It helps me maintain calm and confidence when working with my clients." Scene 2: At that moment, a barista enters and says, "Hey, Dr. Money! I know you're a psychologist, but can you give me some advice? I can't decide which song to choose for my new playlist." Scene 3: Dr. Money smiles and says, "Sure, I can help you. But before that, let me tell you a story. When I was young, I worked as a barista in this same coffee shop. And I remember always playing the same song on repeat when there were a lot of customers. It helped me stay calm and in control." Scene 4: The barista looks surprised at Dr. Money and says, "Wow! I didn't know you used to work here. Thanks for the advice, I'll try playing one song on repeat and see how it goes." Scene 5: Dr. Money smiles and says, "Glad I could help you. And remember, there's always a way to stay calm and in control of your life. If you ever want to discuss this further, I'm always ready to help." End of the video: We see Dr. Money leaving the coffee shop and smiling at the camera. Then we see the text: "Never lose control over your life. Consult with financial psychologist Dr. Money for guidance."
AkshatSG/ppo-LunarLander-v2
AkshatSG
2023-11-25T18:38:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-25T18:37:54Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 272.53 +/- 16.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
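The TODO stub above can be completed roughly as below. The checkpoint filename inside the repo is an assumption (SB3 uploads commonly use `<algo>-<env>.zip`), and recent stable-baselines3 releases expect gymnasium rather than gym.

```python
# Sketch completing the TODO above; the .zip filename is assumed, not confirmed.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("AkshatSG/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires the box2d extra
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```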
lunalade/gpt2.fairyTales
lunalade
2023-11-25T18:35:08Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T17:44:31Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: gpt2.fairyTales results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2.fairyTales This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
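A minimal generation sketch for this checkpoint (the prompt is illustrative):

```python
# Minimal text-generation sketch for the fairy-tale GPT-2 fine-tune.
from transformers import pipeline

generator = pipeline("text-generation", model="lunalade/gpt2.fairyTales")
print(generator("Once upon a time", max_new_tokens=60, do_sample=True)[0]["generated_text"])
```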
RajuEEE/TRL
RajuEEE
2023-11-25T18:33:46Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2-large", "base_model:adapter:openai-community/gpt2-large", "region:us" ]
null
2023-08-28T02:47:05Z
--- library_name: peft base_model: gpt2-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
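Since the card names `gpt2-large` as the base model, one plausible way to attach this PEFT adapter is sketched below; that the adapter targets causal language modeling is an assumption.

```python
# Sketch: loading this PEFT adapter onto its gpt2-large base (causal-LM task assumed).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2-large")
model = PeftModel.from_pretrained(base, "RajuEEE/TRL")
tok = AutoTokenizer.from_pretrained("gpt2-large")

out = model.generate(**tok("Hello", return_tensors="pt"), max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```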
judy93536/distilroberta-rbm231k-ep20-op40-phrase5k
judy93536
2023-11-25T18:28:16Z
5
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:judy93536/distilroberta-rbm231k-ep20-op40", "base_model:finetune:judy93536/distilroberta-rbm231k-ep20-op40", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-25T12:34:35Z
--- license: apache-2.0 base_model: judy93536/distilroberta-rbm231k-ep20-op40 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilroberta-rbm231k-ep20-op40-phrase5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-rbm231k-ep20-op40-phrase5k This model is a fine-tuned version of [judy93536/distilroberta-rbm231k-ep20-op40](https://huggingface.co/judy93536/distilroberta-rbm231k-ep20-op40) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1735 - Accuracy: 0.9459 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.113335054745316e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.28 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 250 | 1.0641 | 0.6156 | | 1.0561 | 2.0 | 500 | 0.9262 | 0.6136 | | 1.0561 | 3.0 | 750 | 0.8124 | 0.6136 | | 0.8474 | 4.0 | 1000 | 0.7271 | 0.6476 | | 0.8474 | 5.0 | 1250 | 0.6283 | 0.7237 | | 0.6495 | 6.0 | 1500 | 0.5066 | 0.7758 | | 0.6495 | 7.0 | 1750 | 0.3436 | 0.8979 | | 0.3657 | 8.0 | 2000 | 0.2598 | 0.9189 | | 0.3657 | 9.0 | 2250 | 0.2117 | 0.9299 | | 0.2023 | 10.0 | 2500 | 0.2022 | 0.9279 | | 0.2023 | 11.0 | 2750 | 0.1787 | 0.9329 | | 0.1518 | 12.0 | 3000 | 0.1632 | 0.9429 | | 0.1518 | 13.0 | 3250 | 0.1883 | 0.9369 | | 0.1324 | 14.0 | 3500 | 0.1699 | 0.9409 | | 0.1324 | 15.0 | 3750 | 0.1704 | 0.9439 | | 0.1225 | 16.0 | 4000 | 0.1755 | 0.9429 | | 0.1225 | 17.0 | 4250 | 0.1743 | 0.9439 | | 0.119 | 18.0 | 4500 | 0.1726 | 0.9459 | | 0.119 | 19.0 | 4750 | 0.1723 | 0.9459 | | 0.1145 | 20.0 | 5000 | 0.1735 | 0.9459 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
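A minimal inference sketch for this phrase-level classifier (the input sentence is illustrative):

```python
# Minimal text-classification sketch for this DistilRoBERTa fine-tune.
from transformers import pipeline

clf = pipeline("text-classification", model="judy93536/distilroberta-rbm231k-ep20-op40-phrase5k")
print(clf("Shares rallied after the earnings beat."))  # [{"label": ..., "score": ...}]
```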
migtissera/Tess-XS-Creative-v1.0
migtissera
2023-11-25T18:14:46Z
8
8
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-16T00:33:24Z
--- license: apache-2.0 --- # Note: This version is experimental and have been depracated. Please use the stable release Tess-XS-v1.3-yarn-128K: https://huggingface.co/migtissera/Tess-XS-v1-3-yarn-128K # Tess ![Tess](https://huggingface.co/migtissera/Tess-M-v1.0/resolve/main/Tess.png) Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XS-v1.0 was trained on the Mistral-7B base. # Prompt Format: ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: What is the relationship between Earth's atmosphere, magnetic field and gravity? ASSISTANT: ``` # Sample Conversation ``` You: What's the relationship between an electric field and a magnetic field, given the same time and location? To fully understand the relationship between electric fields and magnetic fields, we need to consider the phenomenon of electromagnetism, which is the interaction between electric charges and currents that gives rise to both electric and magnetic fields. Electric fields are created by stationary electric charges or by changing magnetic fields. A stationary electric charge produces an electric field that extends outward in all directions from the charge. The strength of the electric field at any point depends on the charge and the distance from the charge. The direction of the electric field lines is determined by the convention that positive charges attract negative charges and repel other positive charges. Magnetic fields, on the other hand, are created by moving electric charges and by changes in electric fields. A moving electric charge creates a magnetic field that surrounds the charge, and it flows in a circular motion. The magnetic field lines are usually depicted as spiraling outward from the moving charge. The strength of the magnetic field at any point depends on the current and the distance from the current-carrying wire. The relationship between electric and magnetic fields is described by Maxwell's equations, which are a set of differential equations governing the behavior of electromagnetic fields. One of Maxwell's equations, the Faraday's law of induction, states that a changing magnetic field induces an electric field. This means that changes in the magnetic field can cause electric fields to be created, and vice versa. In summary, electric and magnetic fields are intimately related, and one cannot exist without the other. A changing electric field creates a magnetic field, and a changing magnetic field creates an electric field. The relationship between them is described by Maxwell's equations, which form the foundation of classical electromagnetism. You: The known physics states that the electric field and the magnetic field for a given object are proportional to each other. i.e. an increase in the magnetic field will result in an increase in the corresponding electric field. But in some UAP events, we have observed that a decrease in electric field actually causing an increase in the corresponding magnetic field. This is not a fault in our measurements. Can we theorize what could be the case here? The statement that an increase in the magnetic field will result in an increase in the corresponding electric field is not correct. It's important to understand that the electric field and the magnetic field are not proportional to each other, and they don't always follow a simple relationship. 
The relationship between the electric and magnetic fields is more complex and is described by Maxwell's equations, which include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and vice versa. However, it's not clear whether you are referring to a specific event or a general observation about UAPs. If you are referring to a particular event in which a decreased electric field is observed, and then an increased magnetic field is observed, this could be due to a number of factors. It's possible that the observation is an artifact of the measurement process, or it could be caused by a phenomenon that is not fully understood yet. It's important to note that our understanding of electromagnetism is based on our current scientific knowledge, and there may be limitations to our understanding of specific phenomena. Additionally, the field of unidentified aerial phenomena (UAP) is still in its infancy, and there is much we do not yet know about these types of events. If you are referring to a specific event that you believe demonstrates this relationship, it would be beneficial to provide more details about the observation and the measurement process, as well as any additional context that could help explain the apparent phenomenon. You: Hang on, so the Maxwell's equations don't actually say that the electric and magnetic fields are proportional to each other? No, Maxwell's equations do not state that the electric and magnetic fields are proportional to each other. They describe the relationship between electric and magnetic fields, but the relationship is more complex than a simple proportionality. Maxwell's equations include the Faraday's law of induction, which states that a changing magnetic field induces an electric field, and this electric field results from the magnetic field changing with time. To simplify the relationship, it might be helpful to consider that a magnetic field can be created by an electric current or by a changing electric field. Similarly, an electric field can be created by a magnetic field that is changing with time. However, the relationship between the electric and magnetic fields is not linear, and it is governed by the laws of electromagnetism. It's important to note that the statement about the proportion of electric and magnetic fields is an oversimplification. The relationship between them is more accurately described by the complex equations of electromagnetism, which include Maxwell's equations. You: Thank you for clarifying, that's really helpful. Can you explain to me the complex equations of electromagnetism now? The complex equations of electromagnetism are Maxwell's equations, which include four partial differential equations that describe the behavior of electric and magnetic fields. These equations are: - Gauss's law for electricity: ∇⋅E = 4πκρ where ρ is the electric charge density. - Gauss's law for magnetism: ∇⋅B = 0 (because there are no magnetic monopoles). - Faraday's law of induction: ∇ × E = -∂B/∂t which gives us the electric field of a changing magnetic field. - Ampère-Maxwell law: ∇ × B = μ0(j + ε0∂E/∂t) which gives us the magnetic field of a changing electric field. These equations, along with specific boundary conditions, completely describe the interaction of electric and magnetic fields in the classical theory of electromagnetism. 
They are complex because they involve the derivatives of the fields with respect to time and space, and they must be solved to understand the behavior of electromagnetic phenomena. You: I notice that you put a value for the permittivity of free space in Gauss's law for electricity. Can you write the equation in general terms? ∇⋅E = 4πκρ This is the general form of Gauss's law for electricity. κ is the electric constant, which has the value 8.854187817... × 10^-12 F/m (or C²/Nm²) in SI units. This value is often called the permittivity of free space, but it is a dimensionless constant that relates the electric field to the charge density. In this equation, the divergence of the electric field (∇⋅E) is equal to the charge density (ρ) multiplied by 4πκ. The divergence is a measure of how much the electric field points towards charges. If there is a positive charge, the electric field will point inward. If there is a negative charge, the electric field will point outward. This equation is the same as Coulomb's law but in a more compact and general form. It can be used to predict the behavior of electric fields in any situation involving charges, whether they are static or moving. You: Thank you! You're great. Now can we move on to Quantum domain. What is the leading theory of electromagnetism in quantum physics? Quantum mechanics is the leading theory of electromagnetism in the quantum domain. It provides a framework for understanding the behavior of subatomic particles and fields, including electromagnetism. In quantum mechanics, electromagnetic fields are described by quantum fields, which are mathematical constructs that represent the energy and momentum of the field. These fields are quantized, meaning that they can only exist in discrete amounts as particles called photons. The most fundamental principle in quantum mechanics is the uncertainty principle, which states that it is impossible to measure both the position and the momentum of a particle with arbitrary precision. This principle has implications for how we understand the behavior of electromagnetic fields at the quantum level. Quantum mechanics also predicts the existence of virtual particles, which are temporary quantum fluctuations that disappear almost immediately after they appear. These virtual particles can interact with each other and with real particles, leading to a complex interplay that affects the behavior of electromagnetic fields. In summary, quantum mechanics provides a framework for understanding electromagnetism in the quantum domain. It replaces classical concepts of fields with a quantum mechanical description that includes uncertainty, virtual particles, and a quantized interpretation of the field. You: What is QED? QED stands for Quantum Electrodynamics. It is a theory in quantum field theory that describes the interactions between electrons and photons. It is based on quantum mechanics and special relativity, and it is one of the most successful theories in physics, having made many precise predictions that have been confirmed by experiment. In QED, electrons and photons are treated as quantum particles, which means they have wave-particle duality and can exhibit both wave-like and particle-like properties. The theory also accounts for the interaction between these particles. QED is characterized by the fine structure constant, which is a dimensionless constant that describes the strength of the electromagnetic force. 
It is very small, which indicates that the electromagnetic force is much weaker than the other fundamental forces. The theory has been tested extensively and has stood up to every experimental challenge. It has been crucial in the development of modern technology, such as lasers and semiconductors, and it continues to be an important tool for understanding high-energy processes in particle physics. ```
smarthi/zephyr-support-chatbot
smarthi
2023-11-25T18:12:31Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/zephyr-7B-alpha-GPTQ", "base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ", "license:mit", "region:us" ]
null
2023-11-25T17:34:39Z
--- license: mit base_model: TheBloke/zephyr-7B-alpha-GPTQ tags: - generated_from_trainer model-index: - name: zephyr-support-chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-support-chatbot This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
Ahmedkltn/ppo-Huggy
Ahmedkltn
2023-11-25T18:10:16Z
11
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-25T18:10:08Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Ahmedkltn/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Parleatacoeur/Llama-2-7b-chat-hf-leyes-peruanas
Parleatacoeur
2023-11-25T18:04:48Z
5
0
peft
[ "peft", "pytorch", "safetensors", "llama", "region:us" ]
null
2023-11-19T07:40:16Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0 - PEFT 0.4.0
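The quantization settings listed above map onto a `BitsAndBytesConfig` roughly as follows; this sketch sets only the fields shown in the card.

```python
# Reconstructing the 4-bit bitsandbytes config listed above (a sketch).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Used as: AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config)
```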
askatasuna/tokipona_to_eng_model_v1
askatasuna
2023-11-25T18:02:36Z
19
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-25T13:57:44Z
--- license: apache-2.0 base_model: t5-base tags: - generated_from_trainer metrics: - bleu model-index: - name: tokipona_to_eng_model_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tokipona_to_eng_model_v1 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2414 - Bleu: 25.378 - Gen Len: 8.8964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 1.8666 | 1.0 | 1430 | 1.4045 | 17.3719 | 8.7792 | | 1.4468 | 2.0 | 2860 | 1.2761 | 20.4928 | 8.9122 | | 1.2337 | 3.0 | 4290 | 1.2259 | 22.5629 | 8.8094 | | 1.1073 | 4.0 | 5720 | 1.2052 | 23.2202 | 8.9101 | | 1.0195 | 5.0 | 7150 | 1.2041 | 24.1567 | 8.9248 | | 0.9152 | 6.0 | 8580 | 1.2082 | 24.1598 | 8.9977 | | 0.8753 | 7.0 | 10010 | 1.2128 | 24.7257 | 8.8864 | | 0.818 | 8.0 | 11440 | 1.2249 | 25.0503 | 8.9076 | | 0.7854 | 9.0 | 12870 | 1.2341 | 25.0301 | 8.9097 | | 0.7655 | 10.0 | 14300 | 1.2414 | 25.378 | 8.8964 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
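A minimal translation sketch for this checkpoint follows; whether the model expects a task prefix is not stated in the card, so none is used, and the Toki Pona input is illustrative.

```python
# Minimal Toki Pona -> English sketch for this T5 fine-tune (no task prefix assumed).
from transformers import pipeline

translator = pipeline("text2text-generation", model="askatasuna/tokipona_to_eng_model_v1")
print(translator("mi olin e sina")[0]["generated_text"])
```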
maddes8cht/georgesung-open_llama_7b_qlora_uncensored-gguf
maddes8cht
2023-11-25T17:57:51Z
141
1
null
[ "gguf", "dataset:ehartford/wizard_vicuna_70k_unfiltered", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-11-25T11:17:35Z
--- license: apache-2.0 datasets: - ehartford/wizard_vicuna_70k_unfiltered --- [![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]() I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information. # open_llama_7b_qlora_uncensored - GGUF - Model creator: [georgesung](https://huggingface.co/georgesung) - Original model: [open_llama_7b_qlora_uncensored](https://huggingface.co/georgesung/open_llama_7b_qlora_uncensored) OpenLlama is a free reimplementation of the original Llama model, licensed under the Apache 2 license. # About GGUF format `gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library. A growing list of software supports it and can therefore use this model. The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov. # Quantization variants A range of quantized files is available to cater to your specific needs. Here's how to choose the best option for you: # Legacy quants Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types. Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants. ## Note: There is now an option to use K-quants even for previously 'incompatible' models, although this involves a fallback solution that makes them not *real* K-quants. More details can be found in the affected model descriptions. (This mainly refers to Falcon 7b and Starcoder models.) # K-quants K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load. So, if possible, use K-quants. With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may see bigger quality differences between the two answers. --- # Original Model Card: # Overview Fine-tuned [OpenLLaMA-7B](https://huggingface.co/openlm-research/open_llama_7b) with an uncensored/unfiltered Wizard-Vicuna conversation dataset [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered). Used QLoRA for fine-tuning. Trained for one epoch on a 24GB GPU (NVIDIA A10G) instance; training took ~18 hours. # Prompt style The model was trained with the following prompt style: ``` ### HUMAN: Hello ### RESPONSE: Hi, how are you? ### HUMAN: I'm fine. ### RESPONSE: How can I help you? ... ``` # Training code Code used to train the model is available [here](https://github.com/georgesung/llm_qlora). # Demo For a Gradio chat application using this model, clone [this HuggingFace Space](https://huggingface.co/spaces/georgesung/open_llama_7b_qlora_uncensored_chat/tree/main) and run it on top of a GPU instance. The basic T4 GPU instance will work. # Blog post Since this was my first time fine-tuning an LLM, I also wrote an accompanying blog post about how I performed the training :) https://georgesung.github.io/ai/qlora-ift/ ***End of original Model File*** --- ## Please consider supporting my work **Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I am hoping for support and contributions toward the continued availability of these kinds of models.
Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community. <center> [![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io) [![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911) [![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht) [![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht) [![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966) </center>
HL02/math_llama-2-7b
HL02
2023-11-25T17:54:27Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2023-11-25T17:51:05Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2 ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
TheBloke/MysticFusion-13B-GPTQ
TheBloke
2023-11-25T17:49:57Z
40
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "base_model:Walmart-the-bag/MysticFusion-13B", "base_model:quantized:Walmart-the-bag/MysticFusion-13B", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-11-25T17:05:21Z
--- base_model: Walmart-the-bag/MysticFusion-13B inference: false language: - en license: llama2 model_creator: wbag model_name: MysticFusion 13B model_type: llama pipeline_tag: text-generation prompt_template: "### Instruction: \n{prompt}\n### Response:\n" quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MysticFusion 13B - GPTQ - Model creator: [wbag](https://huggingface.co/Walmart-the-bag) - Original model: [MysticFusion 13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B) <!-- description start --> # Description This repo contains GPTQ model files for [wbag's MysticFusion 13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MysticFusion-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF) * [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/MysticFusion-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca-Tiefighter ``` ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. 
| | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/MysticFusion-13B-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/MysticFusion-13B-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `MysticFusion-13B-GPTQ`: ```shell mkdir MysticFusion-13B-GPTQ huggingface-cli download TheBloke/MysticFusion-13B-GPTQ --local-dir MysticFusion-13B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir MysticFusion-13B-GPTQ huggingface-cli download TheBloke/MysticFusion-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir MysticFusion-13B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir MysticFusion-13B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MysticFusion-13B-GPTQ --local-dir MysticFusion-13B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/MysticFusion-13B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/MysticFusion-13B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `MysticFusion-13B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/MysticFusion-13B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. 
```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/MysticFusion-13B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: wbag's MysticFusion 13B YAML: ``` models: - model: KoboldAI/LLaMA2-13B-Tiefighter parameters: weight: 0.3 - model: NeverSleep/Noromaid-13b-v0.1.1 parameters: weight: 0.5 - model: lmsys/vicuna-13b-v1.5 parameters: weight: 0.2 merge_method: linear dtype: float16 ``` # Prompt Template: ### Alpaca ``` ### Instruction: ### Response: ```
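For readers who want to reproduce the `merge_method: linear` recipe shown in the YAML above, here is a minimal, hypothetical PyTorch sketch of what a linear merge computes: a per-parameter weighted average of the source checkpoints. The toy tensors stand in for the real 13B state dicts; this is not the exact mergekit implementation.

```python
import torch

def linear_merge(state_dicts, weights):
    """Per-parameter weighted average of state dicts sharing the same keys."""
    merged = {}
    for key in state_dicts[0]:
        acc = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
        merged[key] = acc.to(torch.float16)  # dtype: float16, as in the config above
    return merged

# Toy demo with the weights from the YAML (0.3 / 0.5 / 0.2):
sds = [{"layer.weight": torch.randn(4, 4)} for _ in range(3)]
out = linear_merge(sds, [0.3, 0.5, 0.2])
print(out["layer.weight"].dtype, out["layer.weight"].shape)
```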
Weyaxi/Luban-Marcoroni-13B-v1
Weyaxi
2023-11-25T17:48:08Z
24
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-13T09:25:43Z
--- license: cc-by-nc-4.0 datasets: - Open-Orca/OpenOrca language: - en --- <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> Merge of [Marcoroni-13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B) and [Luban-13B](https://huggingface.co/AIDC-ai-business/Luban-13B) using ties merge. ### *Weights* - [Marcoroni-13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B): 0.5 - [Luban-13B](https://huggingface.co/AIDC-ai-business/Luban-13B): 0.3 ### *Density* - [Marcoroni-13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B): 0.5 - [Luban-13B](https://huggingface.co/AIDC-ai-business/Luban-13B): 0.5 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Luban-Marcoroni-13B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 51.16 | | ARC (25-shot) | 63.65 | | HellaSwag (10-shot) | 82.92 | | MMLU (5-shot) | 58.7 | | TruthfulQA (0-shot) | 55.55 | | Winogrande (5-shot) | 77.03 | | GSM8K (5-shot) | 10.01 | | DROP (3-shot) | 10.25 |
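For context on what a ties merge with these weights and densities does, below is a simplified, hypothetical PyTorch sketch of the TIES procedure for a single parameter tensor: trim each task vector to its largest-magnitude entries (the density), elect a per-entry sign, and combine only the agreeing contributions. Real implementations (e.g. mergekit's) normalize disagreeing entries differently, so treat this as an illustration rather than the exact recipe used for this model.

```python
import torch

def ties_merge(base, tuned, weights, densities):
    # 1. Task vectors: each fine-tuned tensor minus the shared base.
    deltas = [t - base for t in tuned]
    # 2. Trim: zero all but the top-`density` fraction of entries by magnitude.
    trimmed = []
    for delta, density in zip(deltas, densities):
        k = max(1, int(density * delta.numel()))
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        trimmed.append(torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta)))
    # 3. Sign election: the weighted sum decides each entry's sign, and only
    #    deltas that agree with it are merged back onto the base.
    elected = sum(w * t for w, t in zip(weights, trimmed)).sign()
    merged_delta = sum(
        torch.where(t.sign() == elected, w * t, torch.zeros_like(t))
        for w, t in zip(weights, trimmed)
    )
    return base + merged_delta

# Toy demo with this card's weights (0.5 / 0.3) and densities (0.5 / 0.5):
base = torch.zeros(8)
tuned = [base + torch.randn(8), base + torch.randn(8)]
print(ties_merge(base, tuned, weights=[0.5, 0.3], densities=[0.5, 0.5]))
```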
Weyaxi/Luban-Marcoroni-13B-v3
Weyaxi
2023-11-25T17:47:56Z
1,549
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-13T20:09:01Z
--- license: cc-by-nc-4.0 datasets: - Open-Orca/OpenOrca language: - en --- <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> Merge of [Marcoroni-13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B) and [Luban-13B](https://huggingface.co/AIDC-ai-business/Luban-13B) using ties merge. ### *Weights* - [Marcoroni-13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B): 0.5 - [Luban-13B](https://huggingface.co/AIDC-ai-business/Luban-13B): 0.3 ### *Density* - [Marcoroni-13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B): 0.5 - [Luban-13B](https://huggingface.co/AIDC-ai-business/Luban-13B): 0.5 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Luban-Marcoroni-13B-v3) | Metric | Value | |-----------------------|---------------------------| | Avg. | 51.13 | | ARC (25-shot) | 63.74 | | HellaSwag (10-shot) | 82.88 | | MMLU (5-shot) | 58.64 | | TruthfulQA (0-shot) | 55.56 | | Winogrande (5-shot) | 76.87 | | GSM8K (5-shot) | 9.93 | | DROP (3-shot) | 10.25 |
Weyaxi/2x-LoRA-Assemble-13B
Weyaxi
2023-11-25T17:47:36Z
24
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-24T19:34:47Z
--- license: cc-by-nc-4.0 language: - en --- <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> Merge of [oh-yeontaek/llama-2-13B-LoRA-assemble](https://huggingface.co/oh-yeontaek/llama-2-13B-LoRA-assemble) and [oh-yeontaek/llama-2-13B-LoRA-assemble](https://huggingface.co/oh-yeontaek/llama-2-13B-LoRA-assemble) using ties merge. (Merging the model with itself was a mistake, but oddly the resulting score is 0.01 points better.) ### *Weights* - [oh-yeontaek/llama-2-13B-LoRA-assemble](https://huggingface.co/oh-yeontaek/llama-2-13B-LoRA-assemble): 0.5 - [oh-yeontaek/llama-2-13B-LoRA-assemble](https://huggingface.co/oh-yeontaek/llama-2-13B-LoRA-assemble): 0.3 ### *Density* - [oh-yeontaek/llama-2-13B-LoRA-assemble](https://huggingface.co/oh-yeontaek/llama-2-13B-LoRA-assemble): 0.5 - [oh-yeontaek/llama-2-13B-LoRA-assemble](https://huggingface.co/oh-yeontaek/llama-2-13B-LoRA-assemble): 0.5 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__2x-LoRA-Assemble-13B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 51.52 | | ARC (25-shot) | 63.65 | | HellaSwag (10-shot) | 83.47 | | MMLU (5-shot) | 59.82 | | TruthfulQA (0-shot) | 55.94 | | Winogrande (5-shot) | 76.48 | | GSM8K (5-shot) | 9.25 | | DROP (3-shot) | 12.01 |
Weyaxi/SlimOpenOrca-Mistral-7B-v2
Weyaxi
2023-11-25T17:47:23Z
27
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-11T12:46:08Z
--- license: cc-by-nc-4.0 --- <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> Merge of [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using ties merge. ### *Weights* - [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca): 0.6 - [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.4 ### *Density* - [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca): 0.5 - [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__SlimOpenOrca-Mistral-7B-v2) | Metric | Value | |-----------------------|---------------------------| | Avg. | 52.96 | | ARC (25-shot) | 62.88 | | HellaSwag (10-shot) | 83.41 | | MMLU (5-shot) | 62.05 | | TruthfulQA (0-shot) | 56.65 | | Winogrande (5-shot) | 77.58 | | GSM8K (5-shot) | 18.95 | | DROP (3-shot) | 9.19 |
Weyaxi/PuddleJumper-Platypus2-13B-QLoRA-0.80-epoch
Weyaxi
2023-11-25T17:46:36Z
11
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "dataset:Open-Orca/OpenOrca", "dataset:totally-not-an-llm/EverythingLM-data-V2", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-27T22:19:46Z
--- license: llama2 datasets: - garage-bAInd/Open-Platypus - Open-Orca/OpenOrca - totally-not-an-llm/EverythingLM-data-V2 language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/SNX5IX0YqN_MUhUzdMjJc.png) <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # PuddleJumper-Platypus2-13B-QLoRA-0.80-epoch PuddleJumper-Platypus2-13B-QLoRA-0.80-epoch is a merge of [totally-not-an-llm/PuddleJumper-13b](https://huggingface.co/totally-not-an-llm/PuddleJumper-13b) and [Platypus2-13B-QLoRA-0.80-epoch](https://huggingface.co/Weyaxi/Platypus2-13B-QLoRA-0.80-epoch) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__PuddleJumper-Platypus2-13B-QLoRA-0.80-epoch) | Metric | Value | |-----------------------|---------------------------| | Avg. | 47.85 | | ARC (25-shot) | 54.52 | | HellaSwag (10-shot) | 79.36 | | MMLU (5-shot) | 55.15 | | TruthfulQA (0-shot) | 54.32 | | Winogrande (5-shot) | 71.11 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 20.49 |
Weyaxi/Nous-Hermes-Platypus2-13B-QLoRA-0.80-epoch
Weyaxi
2023-11-25T17:46:14Z
75
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-27T20:58:35Z
--- license: llama2 datasets: - garage-bAInd/Open-Platypus language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/SNX5IX0YqN_MUhUzdMjJc.png) <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # Nous-Hermes-Platypus2-13B-QLoRA-0.80-epoch Nous-Hermes-Platypus2-13B-QLoRA-0.80-epoch is a merge of [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) and [Platypus2-13B-QLoRA-0.80-epoch](https://huggingface.co/Weyaxi/Platypus2-13B-QLoRA-0.80-epoch) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__Nous-Hermes-Platypus2-13B-QLoRA-0.80-epoch) | Metric | Value | |-----------------------|---------------------------| | Avg. | 52.89 | | ARC (25-shot) | 59.9 | | HellaSwag (10-shot) | 83.29 | | MMLU (5-shot) | 56.69 | | TruthfulQA (0-shot) | 51.08 | | Winogrande (5-shot) | 75.22 | | GSM8K (5-shot) | 1.44 | | DROP (3-shot) | 42.65 |
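As a general illustration of how a "base model + QLoRA adapter" merge like this one can be produced, here is a hedged sketch using PEFT. It assumes the adapter repo contains a standard PEFT LoRA adapter and that a plain merge-and-unload was used; the card does not confirm the exact procedure, so treat the recipe as an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model (repo names taken from the card above).
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-Llama2-13b", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Nous-Hermes-Llama2-13b")

# Attach the LoRA adapter, then fold its low-rank updates into the base weights.
model = PeftModel.from_pretrained(base, "Weyaxi/Platypus2-13B-QLoRA-0.80-epoch")
model = model.merge_and_unload()

model.save_pretrained("Nous-Hermes-Platypus2-13B-merged")
tokenizer.save_pretrained("Nous-Hermes-Platypus2-13B-merged")
```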
Weyaxi/MythoMix-Platypus2-13B-QLoRA-0.80-epoch
Weyaxi
2023-11-25T17:46:02Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-27T21:54:22Z
--- license: llama2 datasets: - garage-bAInd/Open-Platypus language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/SNX5IX0YqN_MUhUzdMjJc.png) <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # MythoMix-Platypus2-13B-QLoRA-0.80-epoch MythoMix-Platypus2-13B-QLoRA-0.80-epoch is a merge of [Gryphe/MythoMix-L2-13b](https://huggingface.co/Gryphe/MythoMix-L2-13b) and [Platypus2-13B-QLoRA-0.80-epoch](https://huggingface.co/Weyaxi/Platypus2-13B-QLoRA-0.80-epoch) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__MythoMix-Platypus2-13B-QLoRA-0.80-epoch) | Metric | Value | |-----------------------|---------------------------| | Avg. | 52.91 | | ARC (25-shot) | 60.32 | | HellaSwag (10-shot) | 83.72 | | MMLU (5-shot) | 55.74 | | TruthfulQA (0-shot) | 52.18 | | Winogrande (5-shot) | 75.53 | | GSM8K (5-shot) | 0.91 | | DROP (3-shot) | 41.98 |
Weyaxi/Ensemble5-Platypus2-13B-QLora-0.80-epoch
Weyaxi
2023-11-25T17:44:56Z
10
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-30T09:15:18Z
--- license: llama2 datasets: - garage-bAInd/Open-Platypus language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/SNX5IX0YqN_MUhUzdMjJc.png) <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # Ensemble5-Platypus2-13B-QLora-0.80-epoch Ensemble5-Platypus2-13B-QLora-0.80-epoch is a merge of [yeontaek/llama-2-13B-ensemble-v5](https://huggingface.co/yeontaek/llama-2-13B-ensemble-v5) and [Platypus2-13B-QLoRA-0.80-epoch](https://huggingface.co/Weyaxi/Platypus2-13B-QLoRA-0.80-epoch) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__Ensemble5-Platypus2-13B-QLora-0.80-epoch) | Metric | Value | |-----------------------|---------------------------| | Avg. | 49.5 | | ARC (25-shot) | 59.73 | | HellaSwag (10-shot) | 82.66 | | MMLU (5-shot) | 56.94 | | TruthfulQA (0-shot) | 52.92 | | Winogrande (5-shot) | 74.43 | | GSM8K (5-shot) | 1.9 | | DROP (3-shot) | 17.93 |
Andrei21/iti
Andrei21
2023-11-25T17:42:17Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-11-25T17:41:02Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Weyaxi/Airboros2.1-Platypus2-13B-QLora-0.80-epoch
Weyaxi
2023-11-25T17:41:03Z
15
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-30T09:46:21Z
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__Airboros2.1-Platypus2-13B-QLora-0.80-epoch) | Metric | Value | |-----------------------|---------------------------| | Avg. | 51.17 | | ARC (25-shot) | 58.96 | | HellaSwag (10-shot) | 82.46 | | MMLU (5-shot) | 54.62 | | TruthfulQA (0-shot) | 47.71 | | Winogrande (5-shot) | 75.14 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 39.32 |
Weyaxi/MythicalDestroyerV2-Platypus2-13B-QLora-0.80-epoch
Weyaxi
2023-11-25T17:40:58Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-30T10:05:24Z
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__MythicalDestroyerV2-Platypus2-13B-QLora-0.80-epoch) | Metric | Value | |-----------------------|---------------------------| | Avg. | 47.95 | | ARC (25-shot) | 57.34 | | HellaSwag (10-shot) | 81.24 | | MMLU (5-shot) | 55.64 | | TruthfulQA (0-shot) | 55.98 | | Winogrande (5-shot) | 73.88 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 11.55 |
Weyaxi/OpenOrcaPlatypus2-Platypus2-13B-QLora-0.80-epoch
Weyaxi
2023-11-25T17:40:49Z
70
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-30T10:45:51Z
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__OpenOrcaPlatypus2-Platypus2-13B-QLora-0.80-epoch) | Metric | Value | |-----------------------|---------------------------| | Avg. | 49.71 | | ARC (25-shot) | 59.81 | | HellaSwag (10-shot) | 82.69 | | MMLU (5-shot) | 56.96 | | TruthfulQA (0-shot) | 52.92 | | Winogrande (5-shot) | 74.43 | | GSM8K (5-shot) | 2.35 | | DROP (3-shot) | 18.83 |
Weyaxi/Nova-13B-50-step
Weyaxi
2023-11-25T17:40:31Z
68
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-05T20:35:08Z
--- license: llama2 --- <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__Nova-13B-50-step) | Metric | Value | |-----------------------|---------------------------| | Avg. | 48.93 | | ARC (25-shot) | 61.6 | | HellaSwag (10-shot) | 82.31 | | MMLU (5-shot) | 57.27 | | TruthfulQA (0-shot) | 51.53 | | Winogrande (5-shot) | 76.56 | | GSM8K (5-shot) | 4.4 | | DROP (3-shot) | 8.82 |
Weyaxi/2x-LoRA-Assemble-Platypus2-13B
Weyaxi
2023-11-25T17:40:18Z
70
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-04T10:04:27Z
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__2x-LoRA-Assemble-Platypus2-13B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 51.13 | | ARC (25-shot) | 60.58 | | HellaSwag (10-shot) | 82.56 | | MMLU (5-shot) | 58.25 | | TruthfulQA (0-shot) | 54.77 | | Winogrande (5-shot) | 74.9 | | GSM8K (5-shot) | 0.91 | | DROP (3-shot) | 25.96 |
Weyaxi/2x-LoRA-Assemble-Nova-13B
Weyaxi
2023-11-25T17:40:13Z
20
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-04T11:30:36Z
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__2x-LoRA-Assemble-Nova-13B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 50.34 | | ARC (25-shot) | 62.63 | | HellaSwag (10-shot) | 83.24 | | MMLU (5-shot) | 58.64 | | TruthfulQA (0-shot) | 51.88 | | Winogrande (5-shot) | 76.95 | | GSM8K (5-shot) | 10.24 | | DROP (3-shot) | 8.8 |
Weyaxi/Chat-AYB-Platypus2-13B
Weyaxi
2023-11-25T17:40:02Z
14
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-04T12:32:38Z
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__Chat-AYB-Platypus2-13B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 53.39 | | ARC (25-shot) | 60.49 | | HellaSwag (10-shot) | 84.03 | | MMLU (5-shot) | 57.83 | | TruthfulQA (0-shot) | 54.52 | | Winogrande (5-shot) | 75.77 | | GSM8K (5-shot) | 2.96 | | DROP (3-shot) | 38.12 |
Weyaxi/GenAI-Nova-13B
Weyaxi
2023-11-25T17:39:57Z
15
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-04T12:36:58Z
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__GenAI-Nova-13B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 51.53 | | ARC (25-shot) | 62.29 | | HellaSwag (10-shot) | 83.27 | | MMLU (5-shot) | 59.47 | | TruthfulQA (0-shot) | 51.79 | | Winogrande (5-shot) | 77.35 | | GSM8K (5-shot) | 7.73 | | DROP (3-shot) | 18.82 |
Weyaxi/TekniumAiroboros-Nebula-7B
Weyaxi
2023-11-25T17:39:52Z
1,525
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-08T13:57:10Z
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 52.82 | | ARC (25-shot) | 57.17 | | HellaSwag (10-shot) | 81.72 | | MMLU (5-shot) | 55.25 | | TruthfulQA (0-shot) | 51.64 | | Winogrande (5-shot) | 73.24 | | GSM8K (5-shot) | 9.4 | | DROP (3-shot) | 41.33 |
Weyaxi/Platypus2-13B-QLoRA-0.80-epoch
Weyaxi
2023-11-25T17:39:15Z
0
0
null
[ "region:us" ]
null
2023-08-27T17:54:46Z
Yntec/Wonderland
Yntec
2023-11-25T17:38:41Z
460
4
diffusers
[ "diffusers", "safetensors", "General Purpose", "Animation", "Fantasy", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-25T17:16:45Z
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - General Purpose - Animation - Fantasy - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- # Wonderland Samples and prompts: ![Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/_1VNfESHf1sWfCJvgKWIA.png) cartoon pretty cute girl, 16k ultra realistic, color high quality, saxophone, trending on artstation, front lighting, focused, extreme details, unreal engine 5, cinematic, masterpiece, art by Peter Mohrbacher, Hajime Sorayama, Moebius cartoon pretty cute girl, 16k ultra realistic, color high quality, harmonica, trending on artstation, front lighting, focused, extreme details, unreal engine 5, cinematic, masterpiece, art by Peter Mohrbacher, Hajime Sorayama, Moebius cartoon pretty cute girl, 16k ultra realistic, color high quality, piano, trending on artstation, front lighting, focused, extreme details, unreal engine 5, cinematic, masterpiece, art by Peter Mohrbacher, Hajime Sorayama, Moebius cartoon pretty cute girl, 16k ultra realistic, color high quality, violin, trending on artstation, front lighting, focused, extreme details, unreal engine 5, cinematic, masterpiece, art by Peter Mohrbacher, Hajime Sorayama, Moebius
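The card lists no usage code, so here is a minimal, hedged diffusers example based on the `StableDiffusionPipeline` tag above; the sampler settings and output filename are illustrative, not part of the original card.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/Wonderland", torch_dtype=torch.float16
).to("cuda")

# One of the sample prompts from the card:
prompt = ("cartoon pretty cute girl, 16k ultra realistic, color high quality, "
          "piano, trending on artstation, front lighting, focused, extreme details, "
          "unreal engine 5, cinematic, masterpiece")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("wonderland_sample.png")
```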
TheBloke/MysticFusion-13B-AWQ
TheBloke
2023-11-25T17:34:32Z
10
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "base_model:Walmart-the-bag/MysticFusion-13B", "base_model:quantized:Walmart-the-bag/MysticFusion-13B", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2023-11-25T17:05:21Z
--- base_model: Walmart-the-bag/MysticFusion-13B inference: false language: - en license: llama2 model_creator: wbag model_name: MysticFusion 13B model_type: llama pipeline_tag: text-generation prompt_template: "### Instruction: \n{prompt}\n### Response:\n" quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MysticFusion 13B - AWQ - Model creator: [wbag](https://huggingface.co/Walmart-the-bag) - Original model: [MysticFusion 13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B) <!-- description start --> ## Description This repo contains AWQ model files for [wbag's MysticFusion 13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MysticFusion-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF) * [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/MysticFusion-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca-Tiefighter ``` ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/MysticFusion-13B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.25 GB | <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/MysticFusion-13B-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `MysticFusion-13B-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter.
For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/MysticFusion-13B-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example (note the template is a plain string, filled in later with `.format()`): ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template='''### Instruction: {prompt} ### Response: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/MysticFusion-13B-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/MysticFusion-13B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install .
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/MysticFusion-13B-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''### Instruction: {prompt} ### Response: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: wbag's MysticFusion 13B YAML: ``` models: - model: KoboldAI/LLaMA2-13B-Tiefighter parameters: weight: 0.3 - model: NeverSleep/Noromaid-13b-v0.1.1 parameters: weight: 0.5 - model: lmsys/vicuna-13b-v1.5 parameters: weight: 0.2 merge_method: linear dtype: float16 ``` # Prompt Template: ### Alpaca ``` ### Instruction: ### Response: ```
jinfan99/simple_exp
jinfan99
2023-11-25T17:34:28Z
7
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T17:19:38Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: simple_exp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # simple_exp This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3011 - Accuracy: 0.4725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.9475 | 0.97 | 19 | 4.9313 | 0.0162 | | 4.9042 | 2.0 | 39 | 4.8811 | 0.0583 | | 4.8017 | 2.97 | 58 | 4.7814 | 0.2071 | | 4.6283 | 4.0 | 78 | 4.6441 | 0.2589 | | 4.4331 | 4.97 | 97 | 4.5068 | 0.3042 | | 4.2235 | 6.0 | 117 | 4.3708 | 0.3269 | | 4.0763 | 6.97 | 136 | 4.2528 | 0.3657 | | 3.9078 | 8.0 | 156 | 4.1407 | 0.3851 | | 3.7584 | 8.97 | 175 | 4.0365 | 0.4142 | | 3.6051 | 10.0 | 195 | 3.9551 | 0.4045 | | 3.4872 | 10.97 | 214 | 3.8674 | 0.4239 | | 3.3766 | 12.0 | 234 | 3.7926 | 0.4369 | | 3.3067 | 12.97 | 253 | 3.7247 | 0.4531 | | 3.2303 | 14.0 | 273 | 3.6577 | 0.4369 | | 3.0874 | 14.97 | 292 | 3.6127 | 0.4725 | | 3.0064 | 16.0 | 312 | 3.5594 | 0.4854 | | 2.9259 | 16.97 | 331 | 3.5304 | 0.4628 | | 2.9121 | 18.0 | 351 | 3.4865 | 0.4660 | | 2.8135 | 18.97 | 370 | 3.4552 | 0.4725 | | 2.7554 | 20.0 | 390 | 3.4350 | 0.4660 | | 2.749 | 20.97 | 409 | 3.3701 | 0.4790 | | 2.7058 | 22.0 | 429 | 3.3568 | 0.4854 | | 2.6684 | 22.97 | 448 | 3.3298 | 0.5049 | | 2.6722 | 24.0 | 468 | 3.2839 | 0.5049 | | 2.5821 | 24.97 | 487 | 3.2736 | 0.4822 | | 2.5701 | 26.0 | 507 | 3.2917 | 0.4790 | | 2.5679 | 26.97 | 526 | 3.2784 | 0.4984 | | 2.5682 | 28.0 | 546 | 3.3007 | 0.4887 | | 2.517 | 28.97 | 565 | 3.2811 | 0.4790 | | 2.5655 | 29.23 | 570 | 3.3011 | 0.4725 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1 - Datasets 2.15.0 - Tokenizers 0.15.0
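The card does not include inference code; a minimal sketch using the `image-classification` pipeline (the image path is a placeholder, and the target dataset is undocumented):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jinfan99/simple_exp")
# Replace "example.jpg" with a real image path; returns top labels with scores
for pred in classifier("example.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```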
Weyaxi/Nebula-7B
Weyaxi
2023-11-25T17:28:24Z
16
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-04T20:24:47Z
---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/cKySe1S5IW_KnbZpKmozQ.png)

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>

# Nebula-7B

Original weights of Nebula-7B, fine-tuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).

## LoRA Weights

You can access the original LoRA weights here: [PulsarAI/Nebula-7B-Lora](https://huggingface.co/PulsarAI/Nebula-7B-Lora)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__Nebula-7B)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 53.93 |
| ARC (25-shot) | 59.3 |
| HellaSwag (10-shot) | 83.46 |
| MMLU (5-shot) | 57.0 |
| TruthfulQA (0-shot) | 45.56 |
| Winogrande (5-shot) | 76.4 |
| GSM8K (5-shot) | 14.86 |
| DROP (3-shot) | 40.96 |
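No prompt template is documented for this model; a minimal loading sketch, treating it as a standard Mistral-style causal LM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Nebula-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 + device_map="auto" keeps the 7B model within a single 24 GB GPU
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Explain the Open-Platypus dataset in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```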
Weyaxi/Samantha-Nebula-7B
Weyaxi
2023-11-25T17:28:08Z
1,519
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-05T12:39:45Z
---
datasets:
- garage-bAInd/Open-Platypus
language:
- en
license: apache-2.0
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/cKySe1S5IW_KnbZpKmozQ.png)

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>

# Samantha-Nebula-7B

Samantha-Nebula-7B is a merge of [ehartford/samantha-mistral-7b](https://huggingface.co/ehartford/samantha-mistral-7b) and [PulsarAI/Nebula-7B](https://huggingface.co/PulsarAI/Nebula-7B-Lora)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Samantha-Nebula-7B)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 52.87 |
| ARC (25-shot) | 57.0 |
| HellaSwag (10-shot) | 82.25 |
| MMLU (5-shot) | 54.21 |
| TruthfulQA (0-shot) | 49.58 |
| Winogrande (5-shot) | 73.09 |
| GSM8K (5-shot) | 11.37 |
| DROP (3-shot) | 42.57 |
abhaykumar/sample
abhaykumar
2023-11-25T17:27:58Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-25T15:39:44Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 262.90 +/- 17.28 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
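Until the author fills in the usage section, a minimal sketch would be (the checkpoint filename is a guess; check the repo's file list for the actual `.zip` name):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename -- verify against the repository contents
checkpoint = load_from_hub(repo_id="abhaykumar/sample", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```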
kaifahmad/phi-1_5-finetuned-med-text
kaifahmad
2023-11-25T17:21:36Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:other", "region:us" ]
null
2023-11-25T17:21:34Z
--- license: other base_model: microsoft/phi-1_5 tags: - generated_from_trainer model-index: - name: phi-1_5-finetuned-med-text results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-finetuned-med-text This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
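The card gives no inference example; a sketch, assuming the repo contains a full Transformers checkpoint (phi-1_5-era models used custom modeling code, hence `trust_remote_code=True`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaifahmad/phi-1_5-finetuned-med-text"
# If the repo lacks a tokenizer, fall back to AutoTokenizer.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Patient presents with fever and a persistent cough.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```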
ullrichx/ppo-LunanLander-v2
ullrichx
2023-11-25T17:14:08Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-25T17:13:47Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 268.18 +/- 19.33 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
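A minimal rollout sketch until the author adds their code (the checkpoint filename is hypothetical; check the repository file list):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename -- verify against the repository contents
checkpoint = load_from_hub(repo_id="ullrichx/ppo-LunanLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy
env = gym.make("LunarLander-v2", render_mode="human")  # requires gymnasium[box2d]
obs, info = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```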
TheBloke/MysticFusion-13B-GGUF
TheBloke
2023-11-25T17:12:24Z
240
3
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "base_model:Walmart-the-bag/MysticFusion-13B", "base_model:quantized:Walmart-the-bag/MysticFusion-13B", "license:llama2", "region:us" ]
text-generation
2023-11-25T17:05:21Z
--- base_model: Walmart-the-bag/MysticFusion-13B inference: false language: - en license: llama2 model_creator: wbag model_name: MysticFusion 13B model_type: llama pipeline_tag: text-generation prompt_template: "### Instruction: \n{prompt}\n### Response:\n" quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MysticFusion 13B - GGUF - Model creator: [wbag](https://huggingface.co/Walmart-the-bag) - Original model: [MysticFusion 13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [wbag's MysticFusion 13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MysticFusion-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF)
* [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/MysticFusion-13B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca-Tiefighter

```
### Instruction:
{prompt}
### Response:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mysticfusion-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [mysticfusion-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [mysticfusion-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [mysticfusion-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [mysticfusion-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mysticfusion-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [mysticfusion-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [mysticfusion-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mysticfusion-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [mysticfusion-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [mysticfusion-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [mysticfusion-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/MysticFusion-13B-GGUF and below it, a specific filename to download, such as: mysticfusion-13b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/MysticFusion-13B-GGUF mysticfusion-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/MysticFusion-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MysticFusion-13B-GGUF mysticfusion-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m mysticfusion-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: \n{prompt}\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MysticFusion-13B-GGUF", model_file="mysticfusion-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J.
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: wbag's MysticFusion 13B YAML: ``` models: - model: KoboldAI/LLaMA2-13B-Tiefighter parameters: weight: 0.3 - model: NeverSleep/Noromaid-13b-v0.1.1 parameters: weight: 0.5 - model: lmsys/vicuna-13b-v1.5 parameters: weight: 0.2 merge_method: linear dtype: float16 ``` # Prompt Template: ### Alpaca ``` ### Instruction: ### Response: ``` <!-- original-model-card end -->
jinfan99/my_awesome_food_model
jinfan99
2023-11-25T17:11:31Z
7
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-25T14:27:11Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_food_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.7733 - Accuracy: 0.1780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.9271 | 0.97 | 19 | 4.8711 | 0.0680 | | 4.7921 | 2.0 | 39 | 4.8045 | 0.1424 | | 4.7011 | 2.92 | 57 | 4.7733 | 0.1780 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1 - Datasets 2.15.0 - Tokenizers 0.15.0
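No usage code is included; one way to run the model with the processor/model API (the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "jinfan99/my_awesome_food_model"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```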
xiaolongbao888/my_awesome_qa_model
xiaolongbao888
2023-11-25T16:51:53Z
15
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-11-25T16:43:10Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - squad model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.1064 | | 2.5316 | 2.0 | 500 | 1.6539 | | 2.5316 | 3.0 | 750 | 1.6095 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
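The card omits an inference example; a minimal sketch with the `question-answering` pipeline (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="xiaolongbao888/my_awesome_qa_model")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
)
print(result["answer"], result["score"])
```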
ostapeno/flan-library-for-neo-1B_evol-proute_neo_1B_rouge_replace_finegrained_poly_router
ostapeno
2023-11-25T16:34:29Z
0
0
null
[ "region:us" ]
null
2023-11-24T23:06:27Z
Number of experts present in the library: 20 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | ostapeno/flan-library-for-neo-1B_evol=adversarial_qa_dbidaf_answer_the_following_q_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora | | ostapeno/flan-library-for-neo-1B_evol=ai2_arc_ARC_Challenge_1_0_0_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora | | ostapeno/flan-library-for-neo-1B_evol=dbpedia_14_given_a_choice_of_categories__v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora | | ostapeno/flan-library-for-neo-1B_evol=wiqa_effect_with_string_answer_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora | | ostapeno/flan-library-for-neo-1B_evol=quartz_read_passage_below_choose_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora | | ostapeno/flan-library-for-neo-1B_evol=duorc_ParaphraseRC_answer_question_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora | | ostapeno/flan-library-for-neo-1B_evol=dream_baseline_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora | | ostapeno/flan-library-for-neo-1B_evol=wiki_hop_original_choose_best_object_interrogative_2_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora | | ostapeno/flan-library-for-neo-1B_evol=dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora | | ostapeno/flan-library-for-neo-1B_evol=web_questions_whats_the_answer_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora | | ostapeno/flan-library-for-neo-1B_evol=duorc_SelfRC_answer_question_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora | | ostapeno/flan-library-for-neo-1B_evol=social_i_qa_Check_if_a_random_answer_is_valid_or_not_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora | | ostapeno/flan-library-for-neo-1B_evol=quoref_Find_Answer_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora | | ostapeno/flan-library-for-neo-1B_evol=adversarial_qa_dbidaf_generate_question_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora | | ostapeno/flan-library-for-neo-1B_evol=cos_e_v1_11_question_description_option_text_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora | | ostapeno/flan-library-for-neo-1B_evol=yelp_polarity_reviews_0_2_0_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora | | ostapeno/flan-library-for-neo-1B_evol=duorc_ParaphraseRC_title_generation_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora | | ostapeno/flan-library-for-neo-1B_evol=wiqa_what_might_be_the_first_step_of_the_process_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora | | ostapeno/flan-library-for-neo-1B_evol=squad_v1_1_3_0_0_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora | | 
ostapeno/flan-library-for-neo-1B_evol-proute_neo_1B_rouge_replace_finegrained_poly_router=ai2_arc_ARC_Challenge_1_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora | Last updated on: 2023-11-25 11:47:38+00:00
pranavdurai/huggy-rl-v1
pranavdurai
2023-11-25T16:17:11Z
17
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-25T16:17:02Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: pranavdurai/huggy-rl-v1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
nrshoudi/Whisper-tiny-Arabic-phoneme
nrshoudi
2023-11-25T16:11:43Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-tiny", "base_model:adapter:openai/whisper-tiny", "region:us" ]
null
2023-11-25T15:09:44Z
--- library_name: peft base_model: openai/whisper-tiny --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.3.dev0
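The "How to Get Started" section above is empty; a minimal loading sketch, assuming this repo holds a PEFT adapter for `openai/whisper-tiny` (as the base_model tag indicates):

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base model, then apply the adapter weights from this repo
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model = PeftModel.from_pretrained(base, "nrshoudi/Whisper-tiny-Arabic-phoneme")
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
# model.generate(...) can now be used like any Whisper model, e.g. on
# processor(audio_array, sampling_rate=16000, return_tensors="pt").input_features
```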
ostapeno/flan-library-for-neo-1B_evol-proute_neo_1B_rouge_replace_coarsegrained_poly_router_dir
ostapeno
2023-11-25T16:10:48Z
0
0
null
[ "region:us" ]
null
2023-11-24T23:03:11Z
Number of experts present in the library: 20 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | ostapeno/flan-library-for-neo-1B_evol=adversarial_qa_dbidaf_answer_the_following_q_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora | | ostapeno/flan-library-for-neo-1B_evol=ai2_arc_ARC_Challenge_1_0_0_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora | | ostapeno/flan-library-for-neo-1B_evol=dbpedia_14_given_a_choice_of_categories__v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora | | ostapeno/flan-library-for-neo-1B_evol=wiqa_effect_with_string_answer_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora | | ostapeno/flan-library-for-neo-1B_evol=quartz_read_passage_below_choose_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora | | ostapeno/flan-library-for-neo-1B_evol=duorc_ParaphraseRC_answer_question_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora | | ostapeno/flan-library-for-neo-1B_evol=dream_baseline_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora | | ostapeno/flan-library-for-neo-1B_evol=wiki_hop_original_choose_best_object_interrogative_2_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora | | ostapeno/flan-library-for-neo-1B_evol=dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora | | ostapeno/flan-library-for-neo-1B_evol=web_questions_whats_the_answer_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora | | ostapeno/flan-library-for-neo-1B_evol=duorc_SelfRC_answer_question_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora | | ostapeno/flan-library-for-neo-1B_evol=social_i_qa_Check_if_a_random_answer_is_valid_or_not_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora | | ostapeno/flan-library-for-neo-1B_evol=quoref_Find_Answer_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora | | ostapeno/flan-library-for-neo-1B_evol=adversarial_qa_dbidaf_generate_question_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora | | ostapeno/flan-library-for-neo-1B_evol=cos_e_v1_11_question_description_option_text_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora | | ostapeno/flan-library-for-neo-1B_evol=yelp_polarity_reviews_0_2_0_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora | | ostapeno/flan-library-for-neo-1B_evol=duorc_ParaphraseRC_title_generation_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora | | ostapeno/flan-library-for-neo-1B_evol=wiqa_what_might_be_the_first_step_of_the_process_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora | | ostapeno/flan-library-for-neo-1B_evol=squad_v1_1_3_0_0_v0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora | | 
ostapeno/flan-library-for-neo-1B_evol-proute_neo_1B_rouge_replace_coarsegrained_poly_router_dir=web_questions_whats_the_answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora | Last updated on: 2023-11-25 16:10:47+00:00
Andrescotton/CHECHU_SE_SACA_FOTOS_EN_LUGARES
Andrescotton
2023-11-25T16:05:33Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:DeepESP/gpt2-spanish", "base_model:finetune:DeepESP/gpt2-spanish", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T15:43:10Z
--- license: mit base_model: DeepESP/gpt2-spanish tags: - generated_from_trainer model-index: - name: CHECHU_SE_SACA_FOTOS_EN_LUGARES results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CHECHU_SE_SACA_FOTOS_EN_LUGARES This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9305 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9507 | 1.0 | 232 | 0.9489 | | 0.85 | 2.0 | 464 | 0.9294 | | 0.7755 | 3.0 | 696 | 0.9305 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
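The card has no usage example; a minimal generation sketch (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Andrescotton/CHECHU_SE_SACA_FOTOS_EN_LUGARES")
out = generator("Chechu se saca una foto en", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```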
marcoBleike/Taxi-v3
marcoBleike
2023-11-25T16:03:53Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-25T16:03:50Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.44 +/- 2.68
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# load_from_hub in the course notebooks boils down to: download, then unpickle
with open(hf_hub_download(repo_id="marcoBleike/Taxi-v3", filename="q-learning.pkl"), "rb") as f:
    model = pickle.load(f)
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
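Continuing from the snippet above, a greedy rollout sketch (the `"qtable"` key name is an assumption taken from the Deep RL course template):

```python
import numpy as np

state, info = env.reset()
total_reward, done = 0.0, False
while not done:
    # Pick the greedy action from the learned Q-table (key name assumed)
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```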
zerxoi/ddpm-celebahq-finetuned-butterflies-2epochs
zerxoi
2023-11-25T16:03:49Z
45
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-11-25T16:02:52Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Describe your model here ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('zerxoi/ddpm-celebahq-finetuned-butterflies-2epochs') image = pipeline().images[0] image ```
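Sampling runs the full DDPM denoising loop, so it is slow on CPU; a small extension of the example above moves the pipeline to GPU and samples a batch (assumes a CUDA device is available):

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('zerxoi/ddpm-celebahq-finetuned-butterflies-2epochs')
pipeline.to("cuda")  # optional, but CPU sampling takes minutes per image
images = pipeline(batch_size=4).images  # four samples in one call
images[0].save("butterfly.png")
```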
nhormann/bert_adaptation_resenas_de_vinos_2023_11_25_15_31
nhormann
2023-11-25T16:02:28Z
5
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:dccuchile/bert-base-spanish-wwm-uncased", "base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-25T16:02:10Z
--- base_model: dccuchile/bert-base-spanish-wwm-uncased tags: - generated_from_trainer model-index: - name: bert_adaptation_resenas_de_vinos_2023_11_25_15_31 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_adaptation_resenas_de_vinos_2023_11_25_15_31 This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6461 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.9807 | 1.0 | 250 | 3.0434 | | 3.0811 | 2.0 | 500 | 3.1461 | | 2.7704 | 3.0 | 750 | 2.9649 | | 2.6821 | 4.0 | 1000 | 2.8179 | | 2.556 | 5.0 | 1250 | 2.6522 | | 2.3324 | 6.0 | 1500 | 2.7123 | | 2.3137 | 7.0 | 1750 | 2.5994 | | 2.2926 | 8.0 | 2000 | 2.6741 | | 2.1216 | 9.0 | 2250 | 2.6469 | | 2.0317 | 10.0 | 2500 | 2.6205 | | 2.0053 | 11.0 | 2750 | 2.4237 | | 2.0453 | 12.0 | 3000 | 2.5970 | | 1.9702 | 13.0 | 3250 | 2.4548 | | 1.9147 | 14.0 | 3500 | 2.4731 | | 1.9143 | 15.0 | 3750 | 2.4431 | | 1.7803 | 16.0 | 4000 | 2.4247 | | 1.7726 | 17.0 | 4250 | 2.5558 | | 1.7448 | 18.0 | 4500 | 2.5092 | | 1.7008 | 19.0 | 4750 | 2.4883 | | 1.769 | 20.0 | 5000 | 2.4471 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
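A minimal fill-mask sketch (the example sentence is illustrative; BETO-style models use the `[MASK]` token):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="nhormann/bert_adaptation_resenas_de_vinos_2023_11_25_15_31")
for pred in fill("este vino tiene aromas de [MASK] y especias."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```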
panigrah/wineberto-labels
panigrah
2023-11-25T15:50:13Z
10
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "wine", "ner", "en", "license:unknown", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-11-10T09:45:21Z
---
license: unknown
language:
- en
tags:
- wine
- ner
widget:
- text: 'Heitz Cabernet Sauvignon California Napa Valley Napa US'
  example_title: 'California Cab'
---

# Wineberto labels

Pretrained model for named entity recognition on wine labels only, using bert-base-uncased as the base model.

## Model description

## How to use

You can use this model directly for named entity recognition like so:

```python
>>> from transformers import pipeline
>>> # aggregation_strategy="simple" groups word pieces into whole entities,
>>> # which is what produces the 'entity_group' key used below
>>> ner = pipeline('ner', model='panigrah/wineberto-labels', aggregation_strategy='simple')
>>> tokens = ner("Heitz Cabernet Sauvignon California Napa Valley Napa US")
>>> for t in tokens:
>>>     print(f"{t['word']}: {t['entity_group']}: {t['score']:.5}")
heitz: producer: 0.99758
cabernet: wine: 0.92263
sauvignon: wine: 0.92472
california: region: 0.53502
napa valley: subregion: 0.79638
us: country: 0.93675
```

## Training data

The BERT model was trained on 50K wine labels derived from https://www.liv-ex.com/wwd/lwin/ and manually annotated to capture the following tokens:

```
"1": "B-classification",
"2": "B-country",
"3": "B-producer",
"4": "B-region",
"5": "B-subregion",
"6": "B-vintage",
"7": "B-wine"
```

## Training procedure

```
model_id = 'bert-base-uncased'
arguments = TrainingArguments(
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    weight_decay=0.01,
)
...
trainer.train()
```
panigrah/wineberto-ner
panigrah
2023-11-25T15:47:11Z
13
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "wine", "ner", "license:unknown", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-11-10T09:00:25Z
---
license: unknown
pipeline_tag: token-classification
tags:
- wine
- ner
widget:
- text: 'Heitz Cabernet Sauvignon California Napa Valley Napa US this tremendous 100% varietal wine hails from oakville and was aged over three years in oak. juicy red-cherry fruit and a compelling hint of caramel greet the palate, framed by elegant, fine tannins and a subtle minty tone in the background. balanced and rewarding from start to finish, it has years ahead of it to develop further nuance. enjoy 2022'
  example_title: 'California Cab'
---

# Wineberto NER model

Pretrained model on wine labels and descriptions for named entity recognition that uses bert-base-uncased as the base model. This tries to recognize both the wine label and also the description of the wine. <b>The label discovery doesn't work as well as just using the panigrah/wineberto-labels model.</b>

* Updated to remove bias on the position of the wine label in the training inputs.
* Also updated to remove the wine classification labels (e.g. Grand Cru) because the training data for them is not reliable.

## Model description

## How to use

You can use this model directly for named entity recognition like so:

```python
>>> from transformers import pipeline
>>> # aggregation_strategy="simple" groups word pieces into whole entities,
>>> # which is what produces the 'entity_group' key used below
>>> ner = pipeline('ner', model='panigrah/wineberto-ner', aggregation_strategy='simple')
>>> tokens = ner("Heitz Cabernet Sauvignon California Napa Valley Napa US this tremendous 100% varietal wine hails from oakville and was aged over three years in oak. juicy red-cherry fruit and a compelling hint of caramel greet the palate, framed by elegant, fine tannins and a subtle minty tone in the background. balanced and rewarding from start to finish, it has years ahead of it to develop further nuance. enjoy 2022")
>>> for t in tokens:
>>>     print(f"{t['word']}: {t['entity_group']}: {t['score']:.5}")
heitz: producer: 0.99988
cab: wine: 0.9999
##ernet sauvignon: wine: 0.95893
california: province: 0.99992
napa valley: region: 0.99991
napa: subregion: 0.99987
us: country: 0.99996
oak: flavor: 0.99992
juicy: mouthfeel: 0.99992
cherry: flavor: 0.99994
fruit: flavor: 0.99994
cara: flavor: 0.99993
##mel: flavor: 0.99731
mint: flavor: 0.99994
balanced: mouthfeel: 0.99992
```

## Training data

The BERT model was trained on 20K reviews and wine labels derived from https://huggingface.co/datasets/james-burton/wine_reviews_all_text and manually annotated to capture the following tokens:

```
adjective: nice, exciting, strong etc
country: countries specified in label or description
flavor: fruit, apple, toast, smoke etc
grape: Cab, Cabernet Sauvignon, etc
mouthfeel: luscious, smooth, textured, rough etc
producer: wine maker
province, region: province and region of wine - sometimes these get mixed up
```

## Training procedure

```
model_id = 'bert-base-uncased'
arguments = TrainingArguments(
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    weight_decay=0.01,
)
...
trainer.train()
```
nrshoudi/Whisper-large-Arabic-phoneme
nrshoudi
2023-11-25T15:45:45Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "region:us" ]
null
2023-11-25T15:45:41Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer model-index: - name: Whisper-large-Arabic-phoneme results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper-large-Arabic-phoneme This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2133 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0577 | 1.0 | 546 | 0.2082 | | 0.0362 | 2.0 | 1092 | 0.1947 | | 0.0291 | 3.0 | 1638 | 0.1888 | | 0.0159 | 4.0 | 2184 | 0.1927 | | 0.0177 | 5.0 | 2730 | 0.1870 | | 0.0164 | 6.0 | 3276 | 0.1914 | | 0.0111 | 7.0 | 3822 | 0.2031 | | 0.0048 | 8.0 | 4368 | 0.2088 | | 0.0025 | 9.0 | 4914 | 0.2106 | | 0.0014 | 10.0 | 5460 | 0.2133 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
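The hyperparameters listed above correspond roughly to the `Seq2SeqTrainingArguments` sketch below; this is a reconstruction from the listed values, not the author's script (the output directory, the per-epoch evaluation, and the choice of `Seq2SeqTrainingArguments` — typical for Whisper fine-tuning — are assumptions).

```python
# Reconstruction of the listed hyperparameters; output_dir is a placeholder
# and evaluation_strategy is inferred from the per-epoch validation losses.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="Whisper-large-Arabic-phoneme",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=10,
    evaluation_strategy="epoch",
    fp16=True,  # "Native AMP" mixed precision
)
```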
Harrynski/Jose-Saramago
Harrynski
2023-11-25T15:43:40Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:DeepESP/gpt2-spanish", "base_model:finetune:DeepESP/gpt2-spanish", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T15:43:18Z
--- license: mit base_model: DeepESP/gpt2-spanish tags: - generated_from_trainer model-index: - name: Jose-Saramago results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Jose-Saramago This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.5831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.7134 | 1.0 | 3000 | 3.5715 | | 3.2698 | 2.0 | 6000 | 3.5495 | | 3.1131 | 3.0 | 9000 | 3.5556 | | 2.8988 | 4.0 | 12000 | 3.5726 | | 3.0743 | 5.0 | 15000 | 3.5831 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
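Since the card's usage section is empty, a plausible generation sketch follows; the prompt and decoding settings are illustrative, not from the card.

```python
# Hedged usage sketch; prompt and max_new_tokens are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="Harrynski/Jose-Saramago")
print(generator("La ciudad entera", max_new_tokens=50)[0]["generated_text"])
```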
bcecilia1/Martin-Fierro
bcecilia1
2023-11-25T15:24:52Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:DeepESP/gpt2-spanish", "base_model:finetune:DeepESP/gpt2-spanish", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T15:24:25Z
--- license: mit base_model: DeepESP/gpt2-spanish tags: - generated_from_trainer model-index: - name: Martin-Fierro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Martin-Fierro This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.8217 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.0038 | 1.0 | 20 | 5.0091 | | 4.7939 | 2.0 | 40 | 4.9122 | | 4.5079 | 3.0 | 60 | 4.8526 | | 4.394 | 4.0 | 80 | 4.8288 | | 4.3293 | 5.0 | 100 | 4.8217 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
becausecurious/stable-video-diffusion-img2vid-fp16
becausecurious
2023-11-25T15:24:43Z
0
55
null
[ "region:us" ]
null
2023-11-23T18:39:36Z
These are unofficial fp16 versions of * https://huggingface.co/stabilityai/stable-video-diffusion-img2vid * https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt They don't seem to reduce VRAM usage, but they can save you bandwidth and disk space. I couldn't see any difference in the generated results compared to the full models (in lowram mode). -------- Follow me for AI tips & tricks and more: * https://becausecurious.me/ * https://x.com/becausecurious/
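For reference, this kind of fp16 conversion can be reproduced in a few lines; a minimal sketch assuming a single safetensors checkpoint (file names are placeholders — this is not the author's script):

```python
# Minimal fp16 conversion sketch; file names are placeholders.
from safetensors.torch import load_file, save_file

state = load_file("svd_xt.safetensors")
state_fp16 = {
    k: v.half() if v.is_floating_point() else v
    for k, v in state.items()
}
save_file(state_fp16, "svd_xt_fp16.safetensors")
```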
bmistry4/ppo-PyramidsRND
bmistry4
2023-11-25T15:24:03Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-11-25T15:09:12Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: bmistry4/ppo-PyramidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
cboyallian/Julio-Cortazar
cboyallian
2023-11-25T15:23:58Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:DeepESP/gpt2-spanish", "base_model:finetune:DeepESP/gpt2-spanish", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T15:23:35Z
--- license: mit base_model: DeepESP/gpt2-spanish tags: - generated_from_trainer model-index: - name: Julio-Cortazar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Julio-Cortazar This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7471 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7599 | 1.0 | 1416 | 2.7971 | | 2.6592 | 2.0 | 2832 | 2.7505 | | 2.3015 | 3.0 | 4248 | 2.7392 | | 2.2566 | 4.0 | 5664 | 2.7419 | | 2.0833 | 5.0 | 7080 | 2.7471 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
eitansprejer/Julio-Cortazar
eitansprejer
2023-11-25T15:23:14Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:DeepESP/gpt2-spanish", "base_model:finetune:DeepESP/gpt2-spanish", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T15:22:49Z
--- license: mit base_model: DeepESP/gpt2-spanish tags: - generated_from_trainer model-index: - name: Julio-Cortazar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Julio-Cortazar This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1217 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2442 | 1.0 | 177 | 2.2206 | | 2.1964 | 2.0 | 354 | 2.1716 | | 2.1064 | 3.0 | 531 | 2.1494 | | 1.9843 | 4.0 | 708 | 2.1374 | | 1.9402 | 5.0 | 885 | 2.1312 | | 1.9309 | 6.0 | 1062 | 2.1264 | | 1.8384 | 7.0 | 1239 | 2.1227 | | 1.7512 | 8.0 | 1416 | 2.1221 | | 1.742 | 9.0 | 1593 | 2.1215 | | 1.8728 | 10.0 | 1770 | 2.1217 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
OmarAmir2001/ppo-LunarLander-v2
OmarAmir2001
2023-11-25T15:17:55Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-25T15:17:34Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.38 +/- 22.81 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal usage sketch (the checkpoint filename is assumed from the repo naming convention): ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="OmarAmir2001/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
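To reproduce the mean-reward figure reported above, something like the following could be used, assuming `model` was loaded as in the card's snippet (the episode count is an assumption):

```python
# Evaluate the loaded policy; n_eval_episodes is an assumption.
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```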
florenciaaltschuler/bert_adaptation_peppa_pig
florenciaaltschuler
2023-11-25T15:17:25Z
6
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:dccuchile/bert-base-spanish-wwm-uncased", "base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-25T15:17:08Z
--- base_model: dccuchile/bert-base-spanish-wwm-uncased tags: - generated_from_trainer model-index: - name: bert_adaptation_peppa_pig results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_adaptation_peppa_pig This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4132 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.9326 | 1.0 | 35 | 2.9913 | | 2.894 | 2.0 | 70 | 2.6525 | | 2.6986 | 3.0 | 105 | 2.4473 | | 2.4514 | 4.0 | 140 | 2.6013 | | 2.3508 | 5.0 | 175 | 2.3010 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
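A fill-mask usage sketch for this adapted model; the masked sentence is illustrative, not from the card.

```python
# Hedged usage sketch; the masked sentence is illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="florenciaaltschuler/bert_adaptation_peppa_pig")
for pred in fill("peppa y george saltan en los charcos de [MASK]."):
    print(pred["token_str"], pred["score"])
```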
gastonstrizzolo/Martin-Fierro
gastonstrizzolo
2023-11-25T15:16:47Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:DeepESP/gpt2-spanish", "base_model:finetune:DeepESP/gpt2-spanish", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T15:16:09Z
--- license: mit base_model: DeepESP/gpt2-spanish tags: - generated_from_trainer model-index: - name: Martin-Fierro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Martin-Fierro This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.5773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-09 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.6695 | 1.0 | 40 | 5.5774 | | 5.5936 | 2.0 | 80 | 5.5774 | | 5.4914 | 3.0 | 120 | 5.5774 | | 5.6447 | 4.0 | 160 | 5.5774 | | 5.6144 | 5.0 | 200 | 5.5773 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
Areeb123/Distilbert_Masked_Language_Model_IMDB
Areeb123
2023-11-25T15:15:23Z
8
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "en", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-25T15:12:04Z
--- license: mit base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: Distilbert_Masked_Language_Model_IMDB results: [] datasets: - imdb language: - en metrics: - perplexity --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Distilbert_Masked_Language_Model_IMDB This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results perplexity = 13 ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
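The perplexity reported under the training results follows directly from the evaluation cross-entropy loss, since perplexity = exp(loss); a hedged sketch (`tf_eval_dataset` is a hypothetical name for the tokenized evaluation split):

```python
# Perplexity is exp(cross-entropy loss). `tf_eval_dataset` is a
# hypothetical name; model.evaluate returns the scalar loss here
# assuming no extra metrics are compiled.
import math

eval_loss = model.evaluate(tf_eval_dataset)
print(f"Perplexity: {math.exp(eval_loss):.2f}")
```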
gastonstrizzolo/bert_adaptation_martin_fierro
gastonstrizzolo
2023-11-25T15:15:14Z
4
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:dccuchile/bert-base-spanish-wwm-uncased", "base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-25T15:13:10Z
--- base_model: dccuchile/bert-base-spanish-wwm-uncased tags: - generated_from_trainer model-index: - name: bert_adaptation_martin_fierro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_adaptation_martin_fierro This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.1933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.3158 | 1.0 | 29 | 4.4719 | | 4.1329 | 2.0 | 58 | 4.0855 | | 3.7257 | 3.0 | 87 | 4.1261 | | 3.3603 | 4.0 | 116 | 3.9543 | | 3.1951 | 5.0 | 145 | 4.2631 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
octa-cba/Martin-Fierro
octa-cba
2023-11-25T15:12:42Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:DeepESP/gpt2-spanish", "base_model:finetune:DeepESP/gpt2-spanish", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T15:12:19Z
--- license: mit base_model: DeepESP/gpt2-spanish tags: - generated_from_trainer model-index: - name: Martin-Fierro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Martin-Fierro This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.7499 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.0365 | 1.0 | 40 | 4.9395 | | 4.5684 | 2.0 | 80 | 4.8270 | | 4.2594 | 3.0 | 120 | 4.7771 | | 4.0649 | 4.0 | 160 | 4.7564 | | 3.9922 | 5.0 | 200 | 4.7499 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
agusbrusco/Martin-Fierro
agusbrusco
2023-11-25T15:07:44Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:DeepESP/gpt2-spanish", "base_model:finetune:DeepESP/gpt2-spanish", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T15:01:38Z
--- license: mit base_model: DeepESP/gpt2-spanish tags: - generated_from_trainer model-index: - name: Martin-Fierro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Martin-Fierro This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.9312 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.7424 | 1.0 | 5 | 5.0220 | | 4.7253 | 2.0 | 10 | 4.5156 | | 4.4643 | 3.0 | 15 | 4.3808 | | 4.3235 | 4.0 | 20 | 4.2740 | | 4.2015 | 5.0 | 25 | 4.1731 | | 4.0779 | 6.0 | 30 | 4.0667 | | 3.9722 | 7.0 | 35 | 4.0160 | | 3.9136 | 8.0 | 40 | 3.9975 | | 3.878 | 9.0 | 45 | 3.9820 | | 3.8465 | 10.0 | 50 | 3.9675 | | 3.8029 | 11.0 | 55 | 3.9552 | | 3.7845 | 12.0 | 60 | 3.9454 | | 3.7639 | 13.0 | 65 | 3.9383 | | 3.7473 | 14.0 | 70 | 3.9337 | | 3.7358 | 15.0 | 75 | 3.9312 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
Harrynski/bert_adaptation_peppa_pig
Harrynski
2023-11-25T15:03:36Z
4
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:dccuchile/bert-base-spanish-wwm-uncased", "base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-25T15:03:14Z
--- base_model: dccuchile/bert-base-spanish-wwm-uncased tags: - generated_from_trainer model-index: - name: bert_adaptation_peppa_pig results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_adaptation_peppa_pig This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.0491 | 1.0 | 35 | 2.9525 | | 3.062 | 2.0 | 70 | 2.5655 | | 2.3911 | 3.0 | 105 | 2.7133 | | 2.3658 | 4.0 | 140 | 2.2087 | | 2.2161 | 5.0 | 175 | 2.2832 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
zhangjinyang/vit-base-patch16-224-in21k-finetuned-lora-food101
zhangjinyang
2023-11-25T15:03:18Z
6
0
peft
[ "peft", "arxiv:1910.09700", "base_model:google/vit-base-patch16-224-in21k", "base_model:adapter:google/vit-base-patch16-224-in21k", "region:us" ]
null
2023-11-25T14:56:17Z
--- library_name: peft base_model: google/vit-base-patch16-224-in21k --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
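Since the card's getting-started section is empty, a plausible loading pattern for this LoRA adapter is sketched below; the image-classification head and its 101-class size (inferred from "food101" in the repo id) are assumptions.

```python
# Hedged sketch: attach the LoRA adapter to the base ViT.
# The 101-class head is an assumption based on the Food-101 name.
from peft import PeftModel
from transformers import AutoModelForImageClassification

base = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=101
)
model = PeftModel.from_pretrained(
    base, "zhangjinyang/vit-base-patch16-224-in21k-finetuned-lora-food101"
)
model.eval()
```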
Joacopolo/Julio-Cortazar
Joacopolo
2023-11-25T14:59:57Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:DeepESP/gpt2-spanish", "base_model:finetune:DeepESP/gpt2-spanish", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T14:59:29Z
--- license: mit base_model: DeepESP/gpt2-spanish tags: - generated_from_trainer model-index: - name: Julio-Cortazar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Julio-Cortazar This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3962 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3794 | 1.0 | 1416 | 2.4410 | | 2.2936 | 2.0 | 2832 | 2.4003 | | 2.0513 | 3.0 | 4248 | 2.3910 | | 2.0359 | 4.0 | 5664 | 2.3917 | | 1.8027 | 5.0 | 7080 | 2.3962 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0