Column schema and observed value ranges:

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-05 12:28:32 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 468 distinct values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-05 12:27:45 |
| card | string | length 11 – 1.01M |
RefalMachine/solar_10.7_darulm_unigram_proj_init_darulm_part2_r128_a512_v5
RefalMachine
2024-03-04T16:28:59Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "region:us" ]
null
2024-03-04T16:24:29Z
--- library_name: peft tags: - generated_from_trainer metrics: - accuracy base_model: outputs/solar_10.7_darulm_unigram_proj_init_8node_darulm_part1_v3_1.0_512_12_02_24 model-index: - name: solar_10.7_darulm_unigram_proj_init_darulm_part2_r128_a512_v3_1.0_512_28_02_24 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # solar_10.7_darulm_unigram_proj_init_darulm_part2_r128_a512_v3_1.0_512_28_02_24 This model is a fine-tuned version of [outputs/solar_10.7_darulm_unigram_proj_init_8node_darulm_part1_v3_1.0_512_12_02_24](https://huggingface.co/outputs/solar_10.7_darulm_unigram_proj_init_8node_darulm_part1_v3_1.0_512_12_02_24) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2309 - Accuracy: 0.5309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 24 - gradient_accumulation_steps: 5 - total_train_batch_size: 120 - total_eval_batch_size: 24 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 0.0 | 1 | 2.3534 | 0.5148 | | 2.4427 | 0.01 | 500 | 2.3338 | 0.5155 | | 2.4399 | 0.02 | 1000 | 2.3276 | 0.5164 | | 2.4244 | 0.03 | 1500 | 2.3231 | 0.5169 | | 2.4336 | 0.04 | 2000 | 2.3194 | 0.5177 | | 2.4201 | 0.06 | 2500 | 2.3156 | 0.5180 | | 2.4245 | 0.07 | 3000 | 2.3128 | 0.5185 | | 2.4157 | 0.08 | 3500 | 2.3097 | 0.5187 | | 2.4054 | 0.09 | 4000 | 2.3070 | 0.5194 | | 2.4161 | 0.1 | 4500 | 2.3033 | 0.5197 | | 2.395 | 0.11 | 5000 | 2.3020 | 0.5201 | | 2.4037 | 0.12 | 5500 | 2.3001 | 0.5204 | | 2.4188 | 0.13 | 6000 | 2.2977 | 0.5206 | | 2.406 | 0.15 | 6500 | 2.2961 | 0.5208 | | 2.4022 | 0.16 | 7000 | 2.2943 | 0.5210 | | 2.3952 | 0.17 | 7500 | 2.2926 | 0.5217 | | 2.394 | 0.18 | 8000 | 2.2909 | 0.5217 | | 2.3828 | 0.19 | 8500 | 2.2891 | 0.5218 | | 2.3903 | 0.2 | 9000 | 2.2882 | 0.5223 | | 2.3943 | 0.21 | 9500 | 2.2861 | 0.5224 | | 2.3944 | 0.22 | 10000 | 2.2851 | 0.5224 | | 2.3872 | 0.23 | 10500 | 2.2841 | 0.5227 | | 2.381 | 0.25 | 11000 | 2.2820 | 0.5228 | | 2.3832 | 0.26 | 11500 | 2.2798 | 0.5232 | | 2.3813 | 0.27 | 12000 | 2.2793 | 0.5237 | | 2.3715 | 0.28 | 12500 | 2.2779 | 0.5241 | | 2.3898 | 0.29 | 13000 | 2.2764 | 0.5240 | | 2.3717 | 0.3 | 13500 | 2.2757 | 0.5240 | | 2.3745 | 0.31 | 14000 | 2.2742 | 0.5244 | | 2.3657 | 0.32 | 14500 | 2.2732 | 0.5244 | | 2.3782 | 0.34 | 15000 | 2.2715 | 0.5247 | | 2.3761 | 0.35 | 15500 | 2.2706 | 0.5247 | | 2.3827 | 0.36 | 16000 | 2.2692 | 0.5249 | | 2.3659 | 0.37 | 16500 | 2.2678 | 0.5251 | | 2.3551 | 0.38 | 17000 | 2.2674 | 0.5252 | | 2.3605 | 0.39 | 17500 | 2.2662 | 0.5255 | | 2.3579 | 0.4 | 18000 | 2.2654 | 0.5256 | | 2.361 | 0.41 | 18500 | 2.2642 | 0.5257 | | 2.3632 | 0.42 | 19000 | 2.2652 | 0.5254 | | 2.3409 | 0.44 | 19500 | 2.2625 | 0.5261 | | 2.3546 | 0.45 | 20000 | 2.2631 | 0.5259 | | 2.361 | 0.46 | 20500 | 2.2611 | 0.5264 | | 2.355 | 0.47 | 21000 | 2.2598 | 0.5264 | | 2.3599 | 0.48 | 21500 | 2.2588 | 
0.5265 | | 2.3554 | 0.49 | 22000 | 2.2583 | 0.5265 | | 2.3552 | 0.5 | 22500 | 2.2571 | 0.5268 | | 2.3574 | 0.51 | 23000 | 2.2565 | 0.5268 | | 2.3527 | 0.53 | 23500 | 2.2557 | 0.5272 | | 2.3574 | 0.54 | 24000 | 2.2548 | 0.5272 | | 2.3395 | 0.55 | 24500 | 2.2534 | 0.5274 | | 2.3517 | 0.56 | 25000 | 2.2531 | 0.5272 | | 2.346 | 0.57 | 25500 | 2.2521 | 0.5275 | | 2.3469 | 0.58 | 26000 | 2.2515 | 0.5275 | | 2.3451 | 0.59 | 26500 | 2.2509 | 0.5278 | | 2.3373 | 0.6 | 27000 | 2.2501 | 0.5277 | | 2.3512 | 0.61 | 27500 | 2.2493 | 0.5281 | | 2.3351 | 0.63 | 28000 | 2.2485 | 0.5282 | | 2.3431 | 0.64 | 28500 | 2.2476 | 0.5282 | | 2.3399 | 0.65 | 29000 | 2.2463 | 0.5283 | | 2.3376 | 0.66 | 29500 | 2.2463 | 0.5284 | | 2.3574 | 0.67 | 30000 | 2.2456 | 0.5285 | | 2.3312 | 0.68 | 30500 | 2.2447 | 0.5289 | | 2.3442 | 0.69 | 31000 | 2.2442 | 0.5288 | | 2.338 | 0.7 | 31500 | 2.2434 | 0.5289 | | 2.3345 | 0.72 | 32000 | 2.2433 | 0.5291 | | 2.3314 | 0.73 | 32500 | 2.2420 | 0.5292 | | 2.326 | 0.74 | 33000 | 2.2414 | 0.5293 | | 2.3247 | 0.75 | 33500 | 2.2409 | 0.5295 | | 2.3363 | 0.76 | 34000 | 2.2403 | 0.5296 | | 2.3409 | 0.77 | 34500 | 2.2395 | 0.5297 | | 2.335 | 0.78 | 35000 | 2.2391 | 0.5295 | | 2.3194 | 0.79 | 35500 | 2.2383 | 0.5298 | | 2.3367 | 0.8 | 36000 | 2.2379 | 0.5301 | | 2.3286 | 0.82 | 36500 | 2.2372 | 0.5301 | | 2.3225 | 0.83 | 37000 | 2.2366 | 0.5302 | | 2.3198 | 0.84 | 37500 | 2.2363 | 0.5301 | | 2.3274 | 0.85 | 38000 | 2.2355 | 0.5301 | | 2.3195 | 0.86 | 38500 | 2.2349 | 0.5303 | | 2.3418 | 0.87 | 39000 | 2.2344 | 0.5303 | | 2.323 | 0.88 | 39500 | 2.2340 | 0.5304 | | 2.3211 | 0.89 | 40000 | 2.2336 | 0.5304 | | 2.3332 | 0.91 | 40500 | 2.2334 | 0.5306 | | 2.3226 | 0.92 | 41000 | 2.2329 | 0.5307 | | 2.3329 | 0.93 | 41500 | 2.2325 | 0.5308 | | 2.3172 | 0.94 | 42000 | 2.2321 | 0.5307 | | 2.3231 | 0.95 | 42500 | 2.2319 | 0.5308 | | 2.314 | 0.96 | 43000 | 2.2316 | 0.5309 | | 2.3205 | 0.97 | 43500 | 2.2315 | 0.5308 | | 2.3208 | 0.98 | 44000 | 2.2312 | 0.5309 | | 2.3228 | 0.99 | 44500 | 2.2310 | 0.5309 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.15.2 ## Training procedure ### Framework versions - PEFT 0.6.0
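Since this repo stores only PEFT adapter weights rather than a merged model, the usual loading pattern is to instantiate the base model first and attach the adapter. A minimal sketch, assuming the standard `peft` and `transformers` APIs; the base checkpoint named in the card is a local `outputs/` path that is not on the Hub, so its id below is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder: substitute the Solar-10.7B base this adapter was trained from.
base_id = "path/to/solar-10.7b-base"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter weights stored in this repository on top of the base.
model = PeftModel.from_pretrained(
    base, "RefalMachine/solar_10.7_darulm_unigram_proj_init_darulm_part2_r128_a512_v5"
)
model.eval()
```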
KnutJaegersberg/Qwen-1_8B-Chat-llama
KnutJaegersberg
2024-03-04T16:28:40Z
1,361
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-19T07:19:09Z
--- license: other license_name: qwen license_link: LICENSE model-index: - name: Qwen-1_8B-Chat-llama results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 36.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 54.34 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 44.55 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 43.7 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 58.88 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 19.26 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama name: Open LLM Leaderboard --- Their non-commercial research license applies. I used this script to make the model and used the tokenizer of CausalLM, as suggested in the comments of the script. https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Qwen-1_8B-Chat-llama) | Metric |Value| |---------------------------------|----:| |Avg. |42.94| |AI2 Reasoning Challenge (25-Shot)|36.95| |HellaSwag (10-Shot) |54.34| |MMLU (5-Shot) |44.55| |TruthfulQA (0-shot) |43.70| |Winogrande (5-shot) |58.88| |GSM8k (5-shot) |19.26|
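Because the checkpoint has been converted ("llamafied") to the Llama weight layout, it should load with stock `transformers` classes and no `trust_remote_code`, which is the point of the conversion. A sketch; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/Qwen-1_8B-Chat-llama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # plain Llama architecture after conversion

inputs = tokenizer("What does llamafying a Qwen checkpoint change?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```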
KnutJaegersberg/Deacon-1b
KnutJaegersberg
2024-03-04T16:28:01Z
1,444
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-03T05:59:03Z
--- license: cc-by-nc-4.0 model-index: - name: Deacon-1b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 32.42 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 58.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 24.89 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 35.05 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 59.59 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.68 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deacon-1b name: Open LLM Leaderboard --- Base model is appvoid/palmer-001, fine tuned for 3 epochs with Neftune. Prompt Example: ``` ### System: You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. ### Instruction: How do you fine tune a large language model? ### Response: ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deacon-1b) | Metric |Value| |---------------------------------|----:| |Avg. |35.21| |AI2 Reasoning Challenge (25-Shot)|32.42| |HellaSwag (10-Shot) |58.62| |MMLU (5-Shot) |24.89| |TruthfulQA (0-shot) |35.05| |Winogrande (5-shot) |59.59| |GSM8k (5-shot) | 0.68|
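A minimal sketch of using the prompt template above with the `transformers` pipeline; the sampling parameters are illustrative, not taken from the card:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="KnutJaegersberg/Deacon-1b")

# Fill the documented "### System / ### Instruction / ### Response" template.
prompt = (
    "### System:\nYou are an AI assistant. User will give you a task. Your goal is "
    "to complete the task as faithfully as you can. While performing the task think "
    "step-by-step and justify your steps.\n\n"
    "### Instruction:\nHow do you fine tune a large language model?\n\n"
    "### Response:\n"
)
result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```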
KnutJaegersberg/falcon-1b-t-sft
KnutJaegersberg
2024-03-04T16:27:44Z
2,484
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "dataset:KnutJaegersberg/trilobite", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-04T06:19:03Z
--- license: cc-by-nc-4.0 datasets: - KnutJaegersberg/trilobite model-index: - name: falcon-1b-t-sft results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 32.94 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 57.24 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 25.26 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 38.49 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 55.88 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.3 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/falcon-1b-t-sft name: Open LLM Leaderboard --- Made for the purpose of comparison with the tinyllama model. 3 epochs, neftune on trilobite. Prompt Example: ``` ### System: You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. ### Instruction: How do you fine tune a large language model? ### Response: ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__falcon-1b-t-sft) | Metric |Value| |---------------------------------|----:| |Avg. |35.02| |AI2 Reasoning Challenge (25-Shot)|32.94| |HellaSwag (10-Shot) |57.24| |MMLU (5-Shot) |25.26| |TruthfulQA (0-shot) |38.49| |Winogrande (5-shot) |55.88| |GSM8k (5-shot) | 0.30|
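The card credits NEFTune (noisy embedding fine-tuning) for the run. In recent `trl` versions this is a single knob on the SFT trainer; a hedged sketch, where the base-model id, the noise alpha, and the dataset text formatting are assumptions rather than details from the card:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("KnutJaegersberg/trilobite", split="train")

# neftune_noise_alpha adds uniform noise to embedding outputs during training
# only; 5 is a common choice from the NEFTune paper, not the card's value.
config = SFTConfig(output_dir="falcon-1b-t-sft", num_train_epochs=3, neftune_noise_alpha=5)
trainer = SFTTrainer(
    model="tiiuae/falcon-rw-1b",  # assumed base; the card does not name it
    args=config,
    train_dataset=dataset,
)
trainer.train()
```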
KnutJaegersberg/internlm-20b-llama
KnutJaegersberg
2024-03-04T16:27:23Z
1,377
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-15T08:10:53Z
--- license: other license_name: internlm license_link: LICENSE pipeline_tag: text-generation model-index: - name: internlm-20b-llama results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 61.35 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.08 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 61.59 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 57.71 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 51.1 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama name: Open LLM Leaderboard --- Open Source License The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected]. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama) | Metric |Value| |---------------------------------|----:| |Avg. |65.09| |AI2 Reasoning Challenge (25-Shot)|61.35| |HellaSwag (10-Shot) |82.08| |MMLU (5-Shot) |61.59| |TruthfulQA (0-shot) |57.71| |Winogrande (5-shot) |76.72| |GSM8k (5-shot) |51.10|
KnutJaegersberg/Deita-1_8B
KnutJaegersberg
2024-03-04T16:27:06Z
1,365
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T14:54:13Z
--- license: other license_name: qwen license_link: LICENSE model-index: - name: Deita-1_8B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 36.52 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 60.63 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 45.62 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 40.02 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 59.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 15.62 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-1_8B name: Open LLM Leaderboard --- Their noncommercial license applies. Prompt Example: ``` ### System: You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. ### User: How do you fine tune a large language model? ### Assistant: ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deita-1_8B) | Metric |Value| |---------------------------------|----:| |Avg. |42.96| |AI2 Reasoning Challenge (25-Shot)|36.52| |HellaSwag (10-Shot) |60.63| |MMLU (5-Shot) |45.62| |TruthfulQA (0-shot) |40.02| |Winogrande (5-shot) |59.35| |GSM8k (5-shot) |15.62|
DimiPaparas/dqn-SpaceInvadersNoFrameskip-v4
DimiPaparas
2024-03-04T16:24:09Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-04T16:23:39Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 517.00 +/- 88.58 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DimiPaparas -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DimiPaparas -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DimiPaparas ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
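Once downloaded, the agent can also be driven directly from Python with `stable-baselines3`; the zip path below assumes the RL Zoo's default `logs/` layout and may need adjusting:

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Recreate the training-time preprocessing: AtariWrapper + 4-frame stacking.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```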
ExusBurn/CartPole
ExusBurn
2024-03-04T16:23:01Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-04T16:07:50Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 492.28 +/- 54.04 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
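For orientation, Unit 4's Reinforce agent is a small policy network trained with the policy-gradient loss over full-episode returns; a generic sketch of that setup (not this author's exact code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """Tiny MLP mapping a CartPole-v1 observation (4 floats) to 2 action probabilities."""
    def __init__(self, state_size=4, action_size=2, hidden=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden)
        self.fc2 = nn.Linear(hidden, action_size)

    def forward(self, x):
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=-1)

    def act(self, state):
        probs = self.forward(torch.from_numpy(state).float().unsqueeze(0))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)

# REINFORCE update after each episode: loss = -sum_t log_prob_t * G_t,
# where G_t is the discounted return from step t onward.
```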
pszemraj/distilgpt2-HC3
pszemraj
2024-03-04T16:22:26Z
1,496
1
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "chatgpt", "HC3", "en", "dataset:pszemraj/HC3-textgen-qa", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-01-25T21:51:31Z
--- language: - en license: apache-2.0 library_name: transformers tags: - generated_from_trainer - chatgpt - HC3 datasets: - pszemraj/HC3-textgen-qa metrics: - accuracy widget: - text: 'Review: Best cast iron skillet you will ever buy. Is this review positive or negative? <answer>' example_title: Sentiment analysis - text: Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because <answer> example_title: Coreference resolution - text: 'On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book. Here''s the puzzle, <answer>' example_title: Logic puzzles - text: The two men running to become New York City's next mayor will face off in their first debate Wednesday night <answer> example_title: Reading comprehension - text: Is it true that if I have five 5-hour energy drinks in a single 24-hour period, I get 25 hours of energy and spontaneously explode? <answer> example_title: 5 hour energy - text: what happens if you train a smaller model on a dataset of reinforcement-learning optimized model responses? <answer> example_title: deep learning advice inference: parameters: temperature: 0.6 max_length: 96 no_repeat_ngram_size: 4 repetition_penalty: 1.5 eta_cutoff: 0.0008 renormalize_logits: true pipeline_tag: text-generation model-index: - name: distilgpt2-HC3 results: [] --- # distilgpt2-HC3 > what happens if you train a smaller model on a dataset of chatGPT responses? This happens. ![example](https://i.imgur.com/i5snxQJ.png) ## Model description This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the "chatgpt answers" column of the `Hello-SimpleAI/HC3` dataset. It achieves the following results on the evaluation set: - Loss: 1.9983 - Accuracy: 0.5441 ## Intended uses & limitations Despite how it sounds, this model only has 80m parameters and will likely not be factually accurate most of the time. ## Training and evaluation data Modifications made w.r.t. original dataset: - drop all rows that did not have a chatGPT answer - if a row (_i.e. 
ELI5 question, etc_) had more than one response (_from chatGPT_), randomly choose one of the responses as the answer to the question - the "question" and chatGPT answer were combined into a single string for that row as follows: `QUESTION_TEXT <answer> CHATGPT_ANSWER_TEXT <end_answer>` - `<answer>` and `<end_answer>` serve as added tokens to help the model learn "turns" in the conversation ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 3208 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 6.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.2485 | 0.98 | 41 | 2.1457 | 0.5158 | | 2.0757 | 1.98 | 82 | 2.0584 | 0.5304 | | 1.966 | 2.98 | 123 | 2.0210 | 0.5376 | | 1.8602 | 3.98 | 164 | 2.0012 | 0.5422 | | 1.8089 | 4.98 | 205 | 1.9977 | 0.5436 | | 1.7698 | 5.98 | 246 | 1.9983 | 0.5441 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.6.1 - Tokenizers 0.12.1 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__distilgpt2-HC3) | Metric |Value| |---------------------------------|----:| |Avg. |28.18| |AI2 Reasoning Challenge (25-Shot)|24.66| |HellaSwag (10-Shot) |27.99| |MMLU (5-Shot) |23.95| |TruthfulQA (0-shot) |42.10| |Winogrande (5-shot) |50.36| |GSM8k (5-shot) | 0.00|
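A sketch of the preprocessing described above; the field names follow the published `Hello-SimpleAI/HC3` schema (`question`, `chatgpt_answers`) and the config name is an assumption:

```python
import random
from datasets import load_dataset

ds = load_dataset("Hello-SimpleAI/HC3", "all", split="train")

def to_text(row):
    answers = row["chatgpt_answers"]
    if not answers:  # drop rows with no chatGPT answer
        return None
    # One randomly chosen response, joined with the question via the new tokens.
    return f'{row["question"]} <answer> {random.choice(answers)} <end_answer>'

texts = [t for t in (to_text(r) for r in ds) if t is not None]

# The added tokens need embeddings, e.g.:
# tokenizer.add_tokens(["<answer>", "<end_answer>"])
# model.resize_token_embeddings(len(tokenizer))
```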
giraffe176/Open_Neural_Monarch_Maidv0.1
giraffe176
2024-03-04T16:22:08Z
47
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:Intel/neural-chat-7b-v3-1", "base_model:merge:Intel/neural-chat-7b-v3-1", "base_model:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:merge:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "base_model:mlabonne/Monarch-7B", "base_model:merge:mlabonne/Monarch-7B", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:merge:teknium/OpenHermes-2.5-Mistral-7B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T04:21:06Z
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: - Intel/neural-chat-7b-v3-1 - NeverSleep/Noromaid-7B-0.4-DPO - teknium/OpenHermes-2.5-Mistral-7B - mistralai/Mistral-7B-v0.1 - mlabonne/Monarch-7B model-index: - name: Open_Neural_Monarch_Maidv0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.66 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.94 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.02 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.39 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.32 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 61.33 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1 name: Open LLM Leaderboard --- # Open_Neural_Monarch_Maidv0.1 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base. 
### Models Merged The following models were included in the merge: * [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) * [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO) * [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) * [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: mlabonne/Monarch-7B parameters: density: 0.53 weight: 0.15 - model: NeverSleep/Noromaid-7B-0.4-DPO parameters: density: 0.53 weight: 0.3 - model: teknium/OpenHermes-2.5-Mistral-7B parameters: density: 0.53 weight: 0.3 - model: Intel/neural-chat-7b-v3-1 parameters: density: 0.53 weight: 0.25 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__Open_Neural_Monarch_Maidv0.1) | Metric |Value| |---------------------------------|----:| |Avg. |69.28| |AI2 Reasoning Challenge (25-Shot)|67.66| |HellaSwag (10-Shot) |85.94| |MMLU (5-Shot) |65.02| |TruthfulQA (0-shot) |56.39| |Winogrande (5-shot) |79.32| |GSM8k (5-shot) |61.33|
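To reproduce a merge from a recipe like the one above, the YAML is normally handed to mergekit's command-line entry point; a sketch, with the output directory chosen arbitrarily and flags subject to change across mergekit versions:

```bash
pip install mergekit
# config.yml contains the dare_ties recipe shown above
mergekit-yaml config.yml ./Open_Neural_Monarch_Maidv0.1 --cuda
```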
giraffe176/Open_Hermes_Orca_Mistral-7B
giraffe176
2024-03-04T16:22:06Z
55
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-11T05:00:16Z
--- license: apache-2.0 library_name: transformers tags: - mergekit - merge base_model: [] model-index: - name: Open_Hermes_Orca_Mistral-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 64.68 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Hermes_Orca_Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.63 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Hermes_Orca_Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.93 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Hermes_Orca_Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 53.34 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Hermes_Orca_Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Hermes_Orca_Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 56.18 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Hermes_Orca_Mistral-7B name: Open LLM Leaderboard --- # .samplemodel This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using teknium/OpenHermes-2.5-Mistral-7B as a base. ### Models Merged The following models were included in the merge: * Open-Orca/Mistral-7B-OpenOrca ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: teknium/OpenHermes-2.5-Mistral-7B parameters: weight: 1.0 - model: Open-Orca/Mistral-7B-OpenOrca parameters: weight: 0.6 merge_method: task_arithmetic base_model: teknium/OpenHermes-2.5-Mistral-7B dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__Open_Hermes_Orca_Mistral-7B) | Metric |Value| |---------------------------------|----:| |Avg. |66.87| |AI2 Reasoning Challenge (25-Shot)|64.68| |HellaSwag (10-Shot) |84.63| |MMLU (5-Shot) |63.93| |TruthfulQA (0-shot) |53.34| |Winogrande (5-shot) |78.45| |GSM8k (5-shot) |56.18|
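Task arithmetic comes down to adding weighted deltas: each fine-tune contributes a task vector (its difference from the base), scaled by the YAML `weight`. A sketch of the idea, not of mergekit's internals:

```python
def task_arithmetic(base_sd, finetuned_sds, weights):
    """base_sd and each entry of finetuned_sds are state dicts with identical keys."""
    merged = {}
    for name, base_t in base_sd.items():
        # theta_merged = theta_base + sum_i w_i * (theta_i - theta_base)
        delta = sum(w * (sd[name] - base_t) for sd, w in zip(finetuned_sds, weights))
        merged[name] = base_t + delta
    return merged
```

With the config above the base is itself one of the merged models, so its delta vanishes and the result is OpenHermes plus 60% of the OpenOrca-minus-OpenHermes difference.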
giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1
giraffe176
2024-03-04T16:22:03Z
52
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:merge:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:Open-Orca/Mistral-7B-OpenOrca", "base_model:merge:Open-Orca/Mistral-7B-OpenOrca", "base_model:cognitivecomputations/samantha-1.1-westlake-7b", "base_model:merge:cognitivecomputations/samantha-1.1-westlake-7b", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:merge:teknium/OpenHermes-2.5-Mistral-7B", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-18T03:02:27Z
--- license: cc-by-4.0 library_name: transformers tags: - mergekit - merge base_model: - teknium/OpenHermes-2.5-Mistral-7B - NeverSleep/Noromaid-7B-0.4-DPO - mistralai/Mistral-7B-v0.1 - Open-Orca/Mistral-7B-OpenOrca - cognitivecomputations/samantha-1.1-westlake-7b model-index: - name: Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.87 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.48 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.5 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.92 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.68 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1 name: Open LLM Leaderboard --- # Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base. 
### Models Merged The following models were included in the merge: * [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) * [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO) * [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) * [cognitivecomputations/samantha-1.1-westlake-7b](https://huggingface.co/cognitivecomputations/samantha-1.1-westlake-7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: cognitivecomputations/samantha-1.1-westlake-7b parameters: density: 0.59 weight: 0.15 - model: NeverSleep/Noromaid-7B-0.4-DPO parameters: density: 0.59 weight: 0.3 - model: teknium/OpenHermes-2.5-Mistral-7B parameters: density: 0.49 weight: 0.3 - model: Open-Orca/Mistral-7B-OpenOrca parameters: density: 0.49 weight: 0.25 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1) | Metric |Value| |---------------------------------|----:| |Avg. |67.80| |AI2 Reasoning Challenge (25-Shot)|65.87| |HellaSwag (10-Shot) |85.48| |MMLU (5-Shot) |64.50| |TruthfulQA (0-shot) |51.92| |Winogrande (5-shot) |80.35| |GSM8k (5-shot) |58.68|
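The `density` values in the recipe are DARE keep-probabilities: before TIES sign election, each task vector is randomly sparsified and rescaled so its expected value is preserved. A minimal sketch of that drop-and-rescale step:

```python
import torch

def dare_drop(delta: torch.Tensor, density: float) -> torch.Tensor:
    # Keep each entry of the task vector with probability `density`,
    # zero out the rest, then rescale by 1/density to keep the expectation.
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density
```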
BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI
BEE-spoke-data
2024-03-04T16:21:47Z
35
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol_llama", "llama2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T05:52:24Z
--- license: apache-2.0 tags: - generated_from_trainer - smol_llama - llama2 metrics: - accuracy base_model: BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v12-minipile inference: parameters: max_new_tokens: 64 do_sample: true temperature: 0.8 repetition_penalty: 1.15 no_repeat_ngram_size: 4 eta_cutoff: 0.001 renormalize_logits: true widget: - text: My name is El Microondas the Wise and example_title: El Microondas - text: Kennesaw State University is a public example_title: Kennesaw State University - text: Bungie Studios is an American video game developer. They are most famous for developing the award winning Halo series of video games. They also made Destiny. The studio was founded example_title: Bungie - text: The Mona Lisa is a world-renowned painting created by example_title: Mona Lisa - text: The Harry Potter series, written by J.K. Rowling, begins with the book titled example_title: Harry Potter Series - text: 'Question: I have cities, but no houses. I have mountains, but no trees. I have water, but no fish. What am I? Answer:' example_title: Riddle - text: The process of photosynthesis involves the conversion of example_title: Photosynthesis - text: Jane went to the store to buy some groceries. She picked up apples, oranges, and a loaf of bread. When she got home, she realized she forgot example_title: Story Continuation - text: 'Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph, and another train leaves Station B at 10:00 AM and travels at 80 mph, when will they meet if the distance between the stations is 300 miles? To determine' example_title: Math Problem - text: In the context of computer programming, an algorithm is example_title: Algorithm Definition pipeline_tag: text-generation model-index: - name: NanoLlama-GQA-L10-A32_KV8-v13-KI results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 23.81 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 29.39 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 25.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 44.77 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 51.14 name: accuracy source: url: 
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.91 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI name: Open LLM Leaderboard --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI > note that training still WIP This model is a fine-tuned version of [BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v12-minipile](https://huggingface.co/BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v12-minipile) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5937 - Accuracy: 0.4948 ## Training and evaluation data KI dataset `hf-causal-experimental (pretrained=BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI,revision=main,trust_remote_code=True,dtype='float'), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8` | Task |Version| Metric | Value | |Stderr| |--------------|------:|--------|------:|---|-----:| |arc_easy | 0|acc | 0.4322|± |0.0102| | | |acc_norm| 0.3960|± |0.0100| |boolq | 1|acc | 0.6196|± |0.0085| |lambada_openai| 0|ppl |61.6595|± |2.4362| | | |acc | 0.2779|± |0.0062| |openbookqa | 0|acc | 0.1540|± |0.0162| | | |acc_norm| 0.2840|± |0.0202| |piqa | 0|acc | 0.6028|± |0.0114| | | |acc_norm| 0.6028|± |0.0114| |winogrande | 0|acc | 0.5193|± |0.0140| ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00025 - train_batch_size: 8 - eval_batch_size: 4 - seed: 2280 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5744 | 0.08 | 200 | 2.7138 | 0.4776 | | 2.5387 | 0.16 | 400 | 2.6713 | 0.4836 | | 2.4718 | 0.23 | 600 | 2.6462 | 0.4873 | | 2.4681 | 0.31 | 800 | 2.6328 | 0.4892 | | 2.5351 | 0.39 | 1000 | 2.6227 | 0.4908 | | 2.5316 | 0.47 | 1200 | 2.6159 | 0.4914 | | 2.527 | 0.54 | 1400 | 2.6103 | 0.4921 | | 2.4838 | 0.62 | 1600 | 2.6058 | 0.4930 | | 2.4483 | 0.7 | 1800 | 2.6024 | 0.4934 | | 2.426 | 0.78 | 2000 | 2.5998 | 0.4937 | | 2.4685 | 0.86 | 2200 | 2.5961 | 0.4944 | | 2.4473 | 0.93 | 2400 | 2.5937 | 0.4948 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0 - Datasets 2.15.0 - Tokenizers 0.15.0 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__NanoLlama-GQA-L10-A32_KV8-v13-KI) | Metric |Value| |---------------------------------|----:| |Avg. |29.23| |AI2 Reasoning Challenge (25-Shot)|23.81| |HellaSwag (10-Shot) |29.39| |MMLU (5-Shot) |25.37| |TruthfulQA (0-shot) |44.77| |Winogrande (5-shot) |51.14| |GSM8k (5-shot) | 0.91|
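The model name encodes the attention layout: 10 hidden layers (L10), 32 query heads (A32) sharing 8 key/value heads (KV8), i.e. grouped-query attention with 4 query heads per KV head. In `transformers` terms that corresponds roughly to the config below; the hidden size is a placeholder, since the card does not state it:

```python
from transformers import LlamaConfig

config = LlamaConfig(
    num_hidden_layers=10,    # L10
    num_attention_heads=32,  # A32 query heads
    num_key_value_heads=8,   # KV8 -> groups of 4 query heads share one KV head
    hidden_size=768,         # placeholder, not taken from the card
)
```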
beberik/Lonepino-11B
beberik
2024-03-04T16:17:29Z
1,382
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T23:46:43Z
--- license: cc-by-nc-4.0 tags: - merge model-index: - name: Lonepino-11B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.26 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.57 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.76 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.45 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.93 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 61.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B name: Open LLM Leaderboard --- ## Description This repo contains bf16 files of Lonepino-11B. Just a normal model. ## Model used - [Intel/neural-chat-7b-v3-3-Slerp](https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp) - [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2) - [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo) - [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP) ## The secret sauce neural-maid-11B: ``` slices: - sources: - model: Intel/neural-chat-7b-v3-3-Slerp layer_range: [0, 24] - sources: - model: NeverSleep/Noromaid-7b-v0.2 layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` loyal-PiVoT-11B: ``` slices: - sources: - model: chargoddard/loyal-piano-m7-cdpo layer_range: [0, 24] - sources: - model: maywell/PiVoT-0.1-Starling-LM-RP layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Lonepino-11B: ``` slices: - sources: - model: "./neural-maid-11B" layer_range: [0, 48] - model: "./loyal-PiVoT-11B" layer_range: [0, 48] merge_method: slerp base_model: "./neural-maid-11B" parameters: t: - value: 0.4 dtype: bfloat16 ``` ## Prompt template Alpaca. Or chatml. Or any you like. =w= I use [mergekit](https://github.com/cg123/mergekit) for all the manipulation told here. 
Thanks to [Undi95](https://huggingface.co/Undi95) for the original [11B Mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Lonepino-11B) | Metric |Value| |---------------------------------|----:| |Avg. |70.10| |AI2 Reasoning Challenge (25-Shot)|68.26| |HellaSwag (10-Shot) |84.57| |MMLU (5-Shot) |63.76| |TruthfulQA (0-shot) |63.45| |Winogrande (5-shot) |78.93| |GSM8k (5-shot) |61.64|
beberik/Nyxene-v3-11B
beberik
2024-03-04T16:16:13Z
1,403
11
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-12T22:45:39Z
--- license: cc-by-nc-4.0 tags: - merge model-index: - name: Nyxene-v3-11B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.75 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.91 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard --- ## Description This repo contains bf16 files of Nyxene-v1-11B. Just new version with some new things. ## Model used - [Intel/neural-chat-7b-v3-3-Slerp](https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp) - [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) - [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2) - [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo) ## Prompt template Just use chatml. 
## The secret sauce go-bruins-loyal-piano-11B : ``` slices: - sources: - model: rwitz/go-bruins-v2 layer_range: [0, 24] - sources: - model: chargoddard/loyal-piano-m7-cdpo layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` neural-marcoroni-11B : ``` slices: - sources: - model: AIDC-ai-business/Marcoroni-7B-v3 layer_range: [0, 24] - sources: - model: Intel/neural-chat-7b-v3-3-Slerp layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Nyxene-11B : ``` slices: - sources: - model: "./go-bruins-loyal-piano-11B" layer_range: [0, 48] - model: "./neural-marcoroni-11B" layer_range: [0, 48] merge_method: slerp base_model: "./go-bruins-loyal-piano-11B" parameters: t: - filter: lm_head value: [0.5] - filter: embed_tokens value: [0.75] - filter: self_attn value: [0.75, 0.25] - filter: mlp value: [0.25, 0.75] - filter: layernorm value: [0.5, 0.5] - filter: modelnorm value: [0.5] - value: 0.5 # fallback for rest of tensors dtype: bfloat16 ``` I used [mergekit](https://github.com/cg123/mergekit) for all of the merges described here. Thanks to [Undi95](https://huggingface.co/Undi95) for the original [11B Mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-v3-11B) | Metric |Value| |---------------------------------|----:| |Avg. |70.72| |AI2 Reasoning Challenge (25-Shot)|69.62| |HellaSwag (10-Shot) |85.33| |MMLU (5-Shot) |64.75| |TruthfulQA (0-shot) |60.91| |Winogrande (5-shot) |80.19| |GSM8k (5-shot) |63.53|
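The two-element `value` lists in the slerp block above (e.g. `[0.75, 0.25]` for `self_attn`) are gradients: the interpolation weight is swept across the layer stack so early layers lean toward one parent and late layers toward the other. Below is a minimal sketch of such a per-layer schedule under the assumption of simple linear interpolation between the anchor values; mergekit's actual gradient handling may differ in details.

```python
def layer_t_schedule(anchors: list[float], num_layers: int) -> list[float]:
    """Linearly interpolate anchor t-values across num_layers layers."""
    if len(anchors) == 1:
        return [anchors[0]] * num_layers  # constant t, e.g. the 0.5 fallback
    ts = []
    for i in range(num_layers):
        pos = i / max(num_layers - 1, 1) * (len(anchors) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        ts.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return ts

# self_attn uses [0.75, 0.25]: layer 0 takes t=0.75, sliding down to t=0.25
# by the last of the 48 layers in this merge.
print(layer_t_schedule([0.75, 0.25], 48)[:4])  # [0.75, 0.739..., 0.728..., ...]
print(layer_t_schedule([0.5], 48)[0])          # constant 0.5 fallback
```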
beberik/Nyxene-11B
beberik
2024-03-04T16:15:40Z
1,405
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "conversational", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T17:11:58Z
--- license: cc-by-nc-4.0 tags: - merge model-index: - name: Nyxene-11B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.34 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.54 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.09 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 57.5 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 51.78 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B name: Open LLM Leaderboard --- ## Description This repo contains bf16 files of Nyxene-11B. Like [OmniMix](https://huggingface.co/Undi95/Mistral-11B-OmniMix) but with new models. ## Model used - [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) - [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) - [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA) - [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b) ## Prompt template The best one after further testing is this one: ``` <|system|> Below is an instruction that describes a task. Write a response that appropriately completes the request. 
<|user|> {prompt} <|assistant|> ``` ## The secret sauce dolphin-juanako-11B : ``` slices: - sources: - model: fblgit/juanako-7b-UNA layer_range: [0, 24] - sources: - model: ehartford/dolphin-2.1-mistral-7b layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Starling-NeuralHermes-11B : ``` slices: - sources: - model: berkeley-nest/Starling-LM-7B-alpha layer_range: [0, 24] - sources: - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Nyxene-11B : ``` slices: - sources: - model: dolphin-juanako-11B layer_range: [0, 48] - model: Starling-NeuralHermes-11B layer_range: [0, 48] merge_method: slerp base_model: dolphin-juanako-11B parameters: t: - filter: lm_head value: [0.75] - filter: embed_tokens value: [0.75] - filter: self_attn value: [0.75, 0.25] - filter: mlp value: [0.25, 0.75] - filter: layernorm value: [0.5, 0.5] - filter: modelnorm value: [0.75] - value: 0.5 # fallback for rest of tensors dtype: bfloat16 ``` I used [mergekit](https://github.com/cg123/mergekit) for all of the merges described here. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-11B) | Metric |Value| |---------------------------------|----:| |Avg. |67.72| |AI2 Reasoning Challenge (25-Shot)|68.34| |HellaSwag (10-Shot) |84.54| |MMLU (5-Shot) |65.09| |TruthfulQA (0-shot) |57.50| |Winogrande (5-shot) |79.08| |GSM8k (5-shot) |51.78|
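Since the card gives the template as raw special tokens, one plausible way to use it is to format the string yourself and pass it to `transformers` directly. A minimal sketch follows; the user question and the generation parameters are illustrative assumptions, not recommendations from the model author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beberik/Nyxene-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill the template exactly as printed in the card above.
prompt = (
    "<|system|>\n"
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "<|user|>\n"
    "Explain what a passthrough merge does in one paragraph.\n"
    "<|assistant|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```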
BEE-spoke-data/smol_llama-220M-open_instruct
BEE-spoke-data
2024-03-04T16:13:28Z
65
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:VMware/open-instruct", "base_model:BEE-spoke-data/smol_llama-220M-GQA", "base_model:finetune:BEE-spoke-data/smol_llama-220M-GQA", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-30T11:56:14Z
--- license: apache-2.0 datasets: - VMware/open-instruct base_model: BEE-spoke-data/smol_llama-220M-GQA inference: parameters: do_sample: true renormalize_logits: true temperature: 0.25 top_p: 0.95 top_k: 50 min_new_tokens: 2 max_new_tokens: 96 repetition_penalty: 1.04 no_repeat_ngram_size: 6 epsilon_cutoff: 0.0006 widget: - text: "Below is an instruction that describes a task, paired with an input that\ \ provides further context. Write a response that appropriately completes the\ \ request. \n \n### Instruction: \n \nWrite an ode to Chipotle burritos.\ \ \n \n### Response: \n" example_title: burritos model-index: - name: smol_llama-220M-open_instruct results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 25.0 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 29.71 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 44.06 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 50.28 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct name: Open LLM Leaderboard --- # BEE-spoke-data/smol_llama-220M-open_instruct > Please note that this is an experiment, and the model has limitations because it is smol. prompt format is alpaca. ``` Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: How can I increase my meme production/output? Currently, I only create them in ancient babylonian which is time consuming. ### Response: ``` This was **not** trained using a separate 'inputs' field (as `VMware/open-instruct` doesn't use one). ## Example Output on the text above ^. 
The inference API is set to sample with low temp so you should see (_at least slightly_) different generations each time. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/60bccec062080d33f875cd0c/MdOB7TD5UosPGZvdZWG0I.png) Note that the inference API parameters used here are an initial educated guess, and may be updated over time: ```yml inference: parameters: do_sample: true renormalize_logits: true temperature: 0.25 top_p: 0.95 top_k: 50 min_new_tokens: 2 max_new_tokens: 96 repetition_penalty: 1.04 no_repeat_ngram_size: 6 epsilon_cutoff: 0.0006 ``` Feel free to experiment with the parameters using the model in Python and let us know if you have improved results with other params! ## Data This was trained on `VMware/open-instruct` so do whatever you want, provided it falls under the base apache-2.0 license :) --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-220M-open_instruct) | Metric |Value| |---------------------------------|----:| |Avg. |29.19| |AI2 Reasoning Challenge (25-Shot)|25.00| |HellaSwag (10-Shot) |29.71| |MMLU (5-Shot) |26.11| |TruthfulQA (0-shot) |44.06| |Winogrande (5-shot) |50.28| |GSM8k (5-shot) | 0.00|
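The inference-API YAML above maps one-to-one onto keyword arguments of `model.generate()` in `transformers`, so the widget behaviour can be reproduced locally. A minimal sketch, reusing the burrito prompt from the card; everything else mirrors the listed parameters.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BEE-spoke-data/smol_llama-220M-open_instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n### Instruction:\n\nWrite an ode to Chipotle burritos."
    "\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    do_sample=True,
    renormalize_logits=True,
    temperature=0.25,
    top_p=0.95,
    top_k=50,
    min_new_tokens=2,
    max_new_tokens=96,
    repetition_penalty=1.04,
    no_repeat_ngram_size=6,
    epsilon_cutoff=0.0006,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```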
MSLars/de_longformer_abstr_summ
MSLars
2024-03-04T16:09:45Z
11
0
transformers
[ "transformers", "safetensors", "longformer", "token-classification", "generated_from_trainer", "base_model:LennartKeller/longformer-gottbert-base-8192-aw512", "base_model:finetune:LennartKeller/longformer-gottbert-base-8192-aw512", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-02T06:40:10Z
--- base_model: LennartKeller/longformer-gottbert-base-8192-aw512 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: de_longformer_abstr_summ results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # de_longformer_abstr_summ This model is a fine-tuned version of [LennartKeller/longformer-gottbert-base-8192-aw512](https://huggingface.co/LennartKeller/longformer-gottbert-base-8192-aw512) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2916 - Precision: 0.2656 - Recall: 0.2673 - F1: 0.2665 - Accuracy: 0.8948 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2741 | 1.0 | 1171 | 0.2860 | 0.0914 | 0.0307 | 0.0459 | 0.8979 | | 0.2474 | 2.0 | 2342 | 0.2694 | 0.2918 | 0.2508 | 0.2697 | 0.8982 | | 0.2074 | 3.0 | 3513 | 0.2916 | 0.2656 | 0.2673 | 0.2665 | 0.8948 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
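Since the card leaves usage unspecified, here is a minimal sketch of running the checkpoint through the standard `transformers` token-classification pipeline. Note that the label set depends on the (undocumented) training dataset, and the German example sentence is just an illustration.

```python
from transformers import pipeline

# Token classification with the fine-tuned German longformer checkpoint.
classifier = pipeline(
    "token-classification",
    model="MSLars/de_longformer_abstr_summ",
    aggregation_strategy="simple",  # group sub-word tokens into spans
)

text = "Die Bundesregierung hat am Montag ein neues Gesetz beschlossen."
for span in classifier(text):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```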
cognitivecomputations/Wizard-Vicuna-7B-Uncensored
cognitivecomputations
2024-03-04T16:04:55Z
555
94
transformers
[ "transformers", "pytorch", "llama", "text-generation", "uncensored", "en", "dataset:ehartford/wizard_vicuna_70k_unfiltered", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-18T01:47:34Z
--- language: - en license: other tags: - uncensored datasets: - ehartford/wizard_vicuna_70k_unfiltered model-index: - name: Wizard-Vicuna-7B-Uncensored results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 53.41 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/Wizard-Vicuna-7B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 78.85 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/Wizard-Vicuna-7B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 37.09 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/Wizard-Vicuna-7B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 43.48 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/Wizard-Vicuna-7B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 72.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/Wizard-Vicuna-7B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 4.55 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/Wizard-Vicuna-7B-Uncensored name: Open LLM Leaderboard --- This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained against LLaMA-7B with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with for example with a RLHF LoRA. Shout out to the open source AI/ML community, and everyone who helped me out. Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__Wizard-Vicuna-7B-Uncensored) | Metric | Value | |-----------------------|---------------------------| | Avg. 
| 44.77 | | ARC (25-shot) | 53.41 | | HellaSwag (10-shot) | 78.85 | | MMLU (5-shot) | 37.09 | | TruthfulQA (0-shot) | 43.48 | | Winogrande (5-shot) | 72.22 | | GSM8K (5-shot) | 4.55 | | DROP (3-shot) | 23.8 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__Wizard-Vicuna-7B-Uncensored) | Metric |Value| |---------------------------------|----:| |Avg. |48.27| |AI2 Reasoning Challenge (25-Shot)|53.41| |HellaSwag (10-Shot) |78.85| |MMLU (5-Shot) |37.09| |TruthfulQA (0-shot) |43.48| |Winogrande (5-shot) |72.22| |GSM8k (5-shot) | 4.55|
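The card suggests adding alignment back separately, for example with an RLHF LoRA. As a rough sketch of what attaching such an adapter might look like with the `peft` library (the LoRA hyperparameters and target modules below are generic assumptions, not anything the author published):

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "cognitivecomputations/Wizard-Vicuna-7B-Uncensored",
    torch_dtype=torch.float16,
)

# Generic LoRA setup; an actual RLHF pipeline (e.g. PPO or DPO) would train
# these adapter weights on preference data while the base weights stay frozen.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # a common choice for LLaMA blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```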
cognitivecomputations/dolphin-2.0-mistral-7b
cognitivecomputations
2024-03-04T16:04:06Z
266
131
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-02T21:56:22Z
--- language: - en license: apache-2.0 datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 model-index: - name: dolphin-2.0-mistral-7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 59.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.26 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 56.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.09 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 75.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 18.65 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.0-mistral-7b name: Open LLM Leaderboard --- Dolphin 2.0 🐬 https://erichartford.com/dolphin Dolphin-2.0-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/). This model is based on mistralAI, so it is suitable for commercial or non-commercial use. This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Dataset This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/) I modified the dataset for uncensoring, deduping, cleaning, and quality. I added Jon Durbin's excellent Airoboros dataset to increase creativity. ## Training It took 48 hours to train 10 epochs on 4x A100s. Prompt format: This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format. 
``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> ``` Example: ``` <|im_start|>system you are an expert dolphin trainer<|im_end|> <|im_start|>user What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|> ``` ## Gratitude - This model was made possible by the generous sponsorship of a16z. - Thank you to Microsoft for authoring the Orca paper and inspiring this work. - Special thanks to WingLian, and TheBloke for helpful advice - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way. ## Example Output ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/xnz5M1lYd4oGVATSDRkQ-.png) [Buy me a coffee](https://www.buymeacoffee.com/ehartford) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.0-mistral-7b) | Metric | Value | |-----------------------|---------------------------| | Avg. | 55.85 | | ARC (25-shot) | 59.22 | | HellaSwag (10-shot) | 80.26 | | MMLU (5-shot) | 56.9 | | TruthfulQA (0-shot) | 61.09 | | Winogrande (5-shot) | 75.37 | | GSM8K (5-shot) | 18.65 | | DROP (3-shot) | 39.49 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.0-mistral-7b) | Metric |Value| |---------------------------------|----:| |Avg. |58.58| |AI2 Reasoning Challenge (25-Shot)|59.22| |HellaSwag (10-Shot) |80.26| |MMLU (5-Shot) |56.90| |TruthfulQA (0-shot) |61.09| |Winogrande (5-shot) |75.37| |GSM8k (5-shot) |18.65|
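Because the format is ChatML, recent `transformers` versions can usually build the prompt via the tokenizer's chat template rather than by hand. A minimal sketch, assuming this checkpoint's tokenizer ships a ChatML template (if it does not, format the `<|im_start|>` blocks manually as shown above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.0-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "What is the best way to train a dolphin to obey me?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```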
cognitivecomputations/WizardLM-30B-Uncensored
cognitivecomputations
2024-03-04T16:03:43Z
1,822
138
transformers
[ "transformers", "pytorch", "llama", "text-generation", "uncensored", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-22T11:53:45Z
--- license: other tags: - uncensored datasets: - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered model-index: - name: WizardLM-30B-Uncensored results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.24 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.93 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 56.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.57 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 12.89 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard --- This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with for example with a RLHF LoRA. Shout out to the open source AI/ML community, and everyone who helped me out. Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-30B-Uncensored) | Metric | Value | |-----------------------|---------------------------| | Avg. 
| 52.32 | | ARC (25-shot) | 60.24 | | HellaSwag (10-shot) | 82.93 | | MMLU (5-shot) | 56.8 | | TruthfulQA (0-shot) | 51.57 | | Winogrande (5-shot) | 74.35 | | GSM8K (5-shot) | 12.89 | | DROP (3-shot) | 27.45 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-30B-Uncensored) | Metric |Value| |---------------------------------|----:| |Avg. |56.46| |AI2 Reasoning Challenge (25-Shot)|60.24| |HellaSwag (10-Shot) |82.93| |MMLU (5-Shot) |56.80| |TruthfulQA (0-shot) |51.57| |Winogrande (5-shot) |74.35| |GSM8k (5-shot) |12.89|
cognitivecomputations/WizardLM-1.0-Uncensored-Llama2-13b
cognitivecomputations
2024-03-04T16:02:56Z
232
50
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split", "license:llama2", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-06T05:24:46Z
--- language: - en license: llama2 datasets: - ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split model-index: - name: WizardLM-1.0-Uncensored-Llama2-13b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 55.72 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.34 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 55.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.44 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 13.27 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-1.0-Uncensored-Llama2-13b name: Open LLM Leaderboard --- This is a retraining of https://huggingface.co/WizardLM/WizardLM-13B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias. Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-13B-V1.0. Shout out to the open source AI/ML community, and everyone who helped me out. Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it. Like WizardLM/WizardLM-13B-V1.0, this model is trained with Vicuna-1.1 style prompts. ``` You are a helpful AI assistant. 
USER: <prompt> ASSISTANT: ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-1.0-Uncensored-Llama2-13b) | Metric | Value | |-----------------------|---------------------------| | Avg. | 49.31 | | ARC (25-shot) | 55.72 | | HellaSwag (10-shot) | 80.34 | | MMLU (5-shot) | 55.4 | | TruthfulQA (0-shot) | 51.44 | | Winogrande (5-shot) | 74.66 | | GSM8K (5-shot) | 13.27 | | DROP (3-shot) | 14.35 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-1.0-Uncensored-Llama2-13b) | Metric |Value| |---------------------------------|----:| |Avg. |55.14| |AI2 Reasoning Challenge (25-Shot)|55.72| |HellaSwag (10-Shot) |80.34| |MMLU (5-Shot) |55.40| |TruthfulQA (0-shot) |51.44| |Winogrande (5-shot) |74.66| |GSM8k (5-shot) |13.27|
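A minimal sketch of querying the model with the Vicuna-1.1 template above through the `transformers` text-generation pipeline; the question and sampling settings are illustrative assumptions.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="cognitivecomputations/WizardLM-1.0-Uncensored-Llama2-13b",
    device_map="auto",
)

# Vicuna-1.1 style prompt, exactly as documented in the card.
prompt = (
    "You are a helpful AI assistant.\n\n"
    "USER: Summarize the plot of Moby-Dick in three sentences.\n"
    "ASSISTANT:"
)
result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(result[0]["generated_text"][len(prompt):])
```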
cognitivecomputations/minotaur-llama2-13b-qlora
cognitivecomputations
2024-03-04T16:02:20Z
6
4
peft
[ "peft", "base_model:TheBloke/Llama-2-13B-fp16", "base_model:adapter:TheBloke/Llama-2-13B-fp16", "model-index", "region:us" ]
null
2023-08-03T21:25:23Z
--- library_name: peft base_model: TheBloke/Llama-2-13B-fp16 model-index: - name: minotaur-llama2-13b-qlora results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.07 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.42 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 55.87 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 45.57 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 12.05 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora name: Open LLM Leaderboard --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: 
False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__minotaur-llama2-13b-qlora) | Metric | Value | |-----------------------|---------------------------| | Avg. | 49.54 | | ARC (25-shot) | 60.07 | | HellaSwag (10-shot) | 82.42 | | MMLU (5-shot) | 55.87 | | TruthfulQA (0-shot) | 45.57 | | Winogrande (5-shot) | 76.24 | | GSM8K (5-shot) | 12.05 | | DROP (3-shot) | 14.53 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__minotaur-llama2-13b-qlora) | Metric |Value| |---------------------------------|----:| |Avg. |55.37| |AI2 Reasoning Challenge (25-Shot)|60.07| |HellaSwag (10-Shot) |82.42| |MMLU (5-Shot) |55.87| |TruthfulQA (0-shot) |45.57| |Winogrande (5-shot) |76.24| |GSM8k (5-shot) |12.05|
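The logged config corresponds to a standard 4-bit NF4 QLoRA setup. A minimal sketch of reproducing it at load time with `bitsandbytes` and attaching this adapter via `peft`; the repo ids come from this card, and `float32` compute dtype mirrors the log even though `bfloat16` is the more common choice.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,  # as logged; bfloat16 is also common
)

base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-13B-fp16",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "cognitivecomputations/minotaur-llama2-13b-qlora")
model.eval()
```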
Kudod/my_fine_tuning_nllb_200_1B3_peft_en_vi_model_nmt
Kudod
2024-03-04T15:55:38Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "dataset:mt_eng_vietnamese", "base_model:facebook/nllb-200-1.3B", "base_model:adapter:facebook/nllb-200-1.3B", "license:cc-by-nc-4.0", "region:us" ]
null
2024-02-29T03:41:21Z
--- license: cc-by-nc-4.0 library_name: peft tags: - generated_from_trainer datasets: - mt_eng_vietnamese metrics: - bleu base_model: facebook/nllb-200-1.3B model-index: - name: my_fine_tuning_nllb_1B3_peft_en_vi_model_nmt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_fine_tuning_nllb_1B3_peft_en_vi_model_nmt This model is a fine-tuned version of [facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B) on the mt_eng_vietnamese dataset. It achieves the following results on the evaluation set: - Loss: 7.2576 - Bleu: 35.9386 - Gen Len: 30.7029 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:| | 7.8689 | 1.0 | 133318 | 7.2597 | 35.8608 | 30.6438 | | 7.8599 | 2.0 | 266636 | 7.2576 | 35.9386 | 30.7029 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.15.2
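A minimal sketch of running this adapter for English-to-Vietnamese translation, assuming it loads onto the base NLLB checkpoint with `peft`. NLLB needs the source language set on the tokenizer and the target language forced as the first generated token; the input sentence is just an illustration.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "facebook/nllb-200-1.3B"
tokenizer = AutoTokenizer.from_pretrained(base_id, src_lang="eng_Latn")
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    base, "Kudod/my_fine_tuning_nllb_200_1B3_peft_en_vi_model_nmt"
)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
output = model.generate(
    **inputs,
    # Force Vietnamese as the first decoder token.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("vie_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
```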
endorexpress/segformer-b0-finetuned-segments-sidewalk-2
endorexpress
2024-03-04T15:53:40Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-03-04T15:50:34Z
--- license: other base_model: nvidia/mit-b0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-segments-sidewalk-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-sidewalk-2 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set: - Loss: 0.6854 - Mean Iou: 0.2132 - Mean Accuracy: 0.2587 - Overall Accuracy: 0.8151 - Accuracy Unlabeled: nan - Accuracy Flat-road: 0.8383 - Accuracy Flat-sidewalk: 0.9497 - Accuracy Flat-crosswalk: 0.0 - Accuracy Flat-cyclinglane: 0.8212 - Accuracy Flat-parkingdriveway: 0.3818 - Accuracy Flat-railtrack: nan - Accuracy Flat-curb: 0.2786 - Accuracy Human-person: 0.0 - Accuracy Human-rider: 0.0 - Accuracy Vehicle-car: 0.9368 - Accuracy Vehicle-truck: 0.0 - Accuracy Vehicle-bus: 0.0 - Accuracy Vehicle-tramtrain: nan - Accuracy Vehicle-motorcycle: 0.0 - Accuracy Vehicle-bicycle: 0.0 - Accuracy Vehicle-caravan: 0.0 - Accuracy Vehicle-cartrailer: 0.0 - Accuracy Construction-building: 0.9300 - Accuracy Construction-door: 0.0 - Accuracy Construction-wall: 0.0951 - Accuracy Construction-fenceguardrail: 0.0012 - Accuracy Construction-bridge: 0.0 - Accuracy Construction-tunnel: nan - Accuracy Construction-stairs: 0.0 - Accuracy Object-pole: 0.0181 - Accuracy Object-trafficsign: 0.0 - Accuracy Object-trafficlight: 0.0 - Accuracy Nature-vegetation: 0.9377 - Accuracy Nature-terrain: 0.8734 - Accuracy Sky: 0.9576 - Accuracy Void-ground: 0.0 - Accuracy Void-dynamic: 0.0 - Accuracy Void-static: 0.0002 - Accuracy Void-unclear: 0.0 - Iou Unlabeled: nan - Iou Flat-road: 0.6565 - Iou Flat-sidewalk: 0.8602 - Iou Flat-crosswalk: 0.0 - Iou Flat-cyclinglane: 0.7150 - Iou Flat-parkingdriveway: 0.2892 - Iou Flat-railtrack: nan - Iou Flat-curb: 0.2447 - Iou Human-person: 0.0 - Iou Human-rider: 0.0 - Iou Vehicle-car: 0.7028 - Iou Vehicle-truck: 0.0 - Iou Vehicle-bus: 0.0 - Iou Vehicle-tramtrain: nan - Iou Vehicle-motorcycle: 0.0 - Iou Vehicle-bicycle: 0.0 - Iou Vehicle-caravan: 0.0 - Iou Vehicle-cartrailer: 0.0 - Iou Construction-building: 0.6164 - Iou Construction-door: 0.0 - Iou Construction-wall: 0.0896 - Iou Construction-fenceguardrail: 0.0012 - Iou Construction-bridge: 0.0 - Iou Construction-tunnel: nan - Iou Construction-stairs: 0.0 - Iou Object-pole: 0.0180 - Iou Object-trafficsign: 0.0 - Iou Object-trafficlight: 0.0 - Iou Nature-vegetation: 0.8065 - Iou Nature-terrain: 0.7196 - Iou Sky: 0.8903 - Iou Void-ground: 0.0 - Iou Void-dynamic: 0.0 - Iou Void-static: 0.0002 - Iou Void-unclear: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy 
Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:| | 1.133 | 0.05 | 20 | 0.8142 | 0.1927 | 0.2365 | 0.7919 | nan | 0.8488 | 0.9396 | 0.0 | 0.6154 | 0.3232 | nan | 0.0870 | 0.0 | 0.0 | 0.9079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9091 | 0.0 | 0.0057 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 
| 0.0 | 0.9459 | 0.8075 | 0.9417 | 0.0 | 0.0 | 0.0000 | 0.0 | nan | 0.5996 | 0.8350 | 0.0 | 0.5839 | 0.2497 | nan | 0.0824 | 0.0 | 0.0 | 0.6972 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5834 | 0.0 | 0.0057 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7902 | 0.6980 | 0.8488 | 0.0 | 0.0 | 0.0000 | 0.0 | | 0.6183 | 0.1 | 40 | 0.7929 | 0.1935 | 0.2387 | 0.7946 | nan | 0.8424 | 0.9426 | 0.0 | 0.6490 | 0.2786 | nan | 0.0932 | 0.0 | 0.0 | 0.9013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9355 | 0.0 | 0.0078 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9106 | 0.9015 | 0.9372 | 0.0 | 0.0 | 0.0000 | 0.0 | nan | 0.6021 | 0.8449 | 0.0 | 0.5861 | 0.2298 | nan | 0.0889 | 0.0 | 0.0 | 0.6913 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5851 | 0.0 | 0.0078 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7952 | 0.6890 | 0.8787 | 0.0 | 0.0 | 0.0000 | 0.0 | | 0.7143 | 0.15 | 60 | 0.7832 | 0.1963 | 0.2407 | 0.7970 | nan | 0.8115 | 0.9508 | 0.0 | 0.6225 | 0.3488 | nan | 0.1208 | 0.0 | 0.0 | 0.9286 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9124 | 0.0 | 0.0163 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9415 | 0.8645 | 0.9439 | 0.0 | 0.0 | 0.0000 | 0.0 | nan | 0.6145 | 0.8356 | 0.0 | 0.5800 | 0.2642 | nan | 0.1131 | 0.0 | 0.0 | 0.6861 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6004 | 0.0 | 0.0161 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.7914 | 0.7056 | 0.8799 | 0.0 | 0.0 | 0.0000 | 0.0 | | 0.7266 | 0.2 | 80 | 0.7789 | 0.1933 | 0.2380 | 0.7922 | nan | 0.8418 | 0.9346 | 0.0 | 0.6266 | 0.3044 | nan | 0.0522 | 0.0 | 0.0 | 0.9256 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9212 | 0.0 | 0.0297 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9329 | 0.8752 | 0.9345 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5945 | 0.8266 | 0.0 | 0.5830 | 0.2447 | nan | 0.0506 | 0.0 | 0.0 | 0.6819 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6030 | 0.0 | 0.0293 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.7998 | 0.6952 | 0.8834 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.8732 | 0.25 | 100 | 0.7598 | 0.2025 | 0.2515 | 0.7986 | nan | 0.8657 | 0.9218 | 0.0 | 0.7984 | 0.3960 | nan | 0.1382 | 0.0 | 0.0 | 0.9323 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9264 | 0.0 | 0.0718 | 0.0 | 0.0 | nan | 0.0 | 0.0010 | 0.0 | 0.0 | 0.8819 | 0.9151 | 0.9484 | 0.0 | 0.0 | 0.0001 | 0.0 | nan | 0.6212 | 0.8387 | 0.0 | 0.7013 | 0.2830 | nan | 0.1285 | 0.0 | 0.0 | 0.6971 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6065 | 0.0 | 0.0696 | 0.0 | 0.0 | nan | 0.0 | 0.0010 | 0.0 | 0.0 | 0.7925 | 0.6575 | 0.8817 | 0.0 | 0.0 | 0.0001 | 0.0 | | 1.0414 | 0.3 | 120 | 0.7519 | 0.2004 | 0.2426 | 0.8027 | nan | 0.7989 | 0.9643 | 0.0 | 0.7894 | 0.2927 | nan | 0.0920 | 0.0 | 0.0 | 0.9300 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9095 | 0.0 | 0.0169 | 0.0 | 0.0 | nan | 0.0 | 0.0012 | 0.0 | 0.0 | 0.9403 | 0.8354 | 0.9507 | 0.0 | 0.0 | 0.0003 | 0.0 | nan | 0.6503 | 0.8249 | 0.0 | 0.7135 | 0.2513 | nan | 0.0851 | 0.0 | 0.0 | 0.7073 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5968 | 0.0 | 0.0167 | 0.0 | 0.0 | nan | 0.0 | 0.0012 | 0.0 | 0.0 | 0.7986 | 0.6976 | 0.8675 | 0.0 | 0.0 | 0.0003 | 0.0 | | 0.7812 | 0.35 | 140 | 0.7660 | 0.2004 | 0.2433 | 0.8008 | nan | 0.7714 | 0.9656 | 0.0 | 0.8225 | 0.2306 | nan | 0.1649 | 0.0 | 0.0 | 0.9393 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9225 | 0.0 | 0.0362 | 0.0 | 0.0 | nan | 0.0 | 0.0003 | 0.0 | 0.0 | 0.9328 | 0.8450 | 0.9101 | 0.0 | 0.0 | 0.0000 | 0.0 | nan | 0.6427 | 0.8220 | 0.0 | 0.7043 | 0.2006 | nan | 0.1508 | 0.0 | 0.0 | 0.6825 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5969 | 0.0 | 0.0354 | 0.0 | 0.0 | nan 
| 0.0 | 0.0003 | 0.0 | 0.0 | 0.8031 | 0.7024 | 0.8709 | 0.0 | 0.0 | 0.0000 | 0.0 | | 0.6117 | 0.4 | 160 | 0.7395 | 0.2078 | 0.2505 | 0.8074 | nan | 0.8021 | 0.9599 | 0.0 | 0.7951 | 0.3092 | nan | 0.2320 | 0.0 | 0.0 | 0.9291 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9184 | 0.0 | 0.0807 | 0.0001 | 0.0 | nan | 0.0 | 0.0031 | 0.0 | 0.0 | 0.9379 | 0.8574 | 0.9416 | 0.0 | 0.0 | 0.0001 | 0.0 | nan | 0.6450 | 0.8340 | 0.0 | 0.7138 | 0.2435 | nan | 0.2008 | 0.0 | 0.0 | 0.7102 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6146 | 0.0 | 0.0776 | 0.0001 | 0.0 | nan | 0.0 | 0.0031 | 0.0 | 0.0 | 0.8047 | 0.7070 | 0.8874 | 0.0 | 0.0 | 0.0001 | 0.0 | | 1.1176 | 0.45 | 180 | 0.7283 | 0.2088 | 0.2543 | 0.8066 | nan | 0.7949 | 0.9620 | 0.0 | 0.7781 | 0.3479 | nan | 0.2238 | 0.0 | 0.0 | 0.9365 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8798 | 0.0 | 0.1794 | 0.0000 | 0.0 | nan | 0.0 | 0.0070 | 0.0 | 0.0 | 0.9212 | 0.8978 | 0.9546 | 0.0 | 0.0 | 0.0004 | 0.0 | nan | 0.6398 | 0.8361 | 0.0 | 0.7046 | 0.2608 | nan | 0.1909 | 0.0 | 0.0 | 0.6770 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6329 | 0.0 | 0.1669 | 0.0000 | 0.0 | nan | 0.0 | 0.0070 | 0.0 | 0.0 | 0.7940 | 0.6798 | 0.8840 | 0.0 | 0.0 | 0.0004 | 0.0 | | 1.0874 | 0.5 | 200 | 0.7138 | 0.2074 | 0.2497 | 0.8093 | nan | 0.8548 | 0.9534 | 0.0 | 0.7502 | 0.3509 | nan | 0.2045 | 0.0 | 0.0 | 0.9139 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9440 | 0.0 | 0.0572 | 0.0 | 0.0 | nan | 0.0 | 0.0022 | 0.0 | 0.0 | 0.9277 | 0.8297 | 0.9518 | 0.0 | 0.0 | 0.0003 | 0.0 | nan | 0.6508 | 0.8521 | 0.0 | 0.6877 | 0.2737 | nan | 0.1824 | 0.0 | 0.0 | 0.7291 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5955 | 0.0 | 0.0551 | 0.0 | 0.0 | nan | 0.0 | 0.0022 | 0.0 | 0.0 | 0.8090 | 0.7067 | 0.8854 | 0.0 | 0.0 | 0.0003 | 0.0 | | 1.1744 | 0.55 | 220 | 0.7095 | 0.2072 | 0.2491 | 0.8070 | nan | 0.8193 | 0.9577 | 0.0 | 0.7556 | 0.3430 | nan | 0.2271 | 0.0 | 0.0 | 0.9157 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9490 | 0.0 | 0.0440 | 0.0000 | 0.0 | nan | 0.0 | 0.0031 | 0.0 | 0.0 | 0.9225 | 0.8357 | 0.9500 | 0.0 | 0.0 | 0.0004 | 0.0 | nan | 0.6459 | 0.8463 | 0.0 | 0.7019 | 0.2733 | nan | 0.1944 | 0.0 | 0.0 | 0.7281 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5841 | 0.0 | 0.0412 | 0.0000 | 0.0 | nan | 0.0 | 0.0031 | 0.0 | 0.0 | 0.8140 | 0.7022 | 0.8892 | 0.0 | 0.0 | 0.0004 | 0.0 | | 0.8371 | 0.6 | 240 | 0.7224 | 0.2081 | 0.2506 | 0.8073 | nan | 0.8102 | 0.9614 | 0.0 | 0.7220 | 0.3368 | nan | 0.2390 | 0.0 | 0.0 | 0.9278 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9066 | 0.0 | 0.1151 | 0.0001 | 0.0 | nan | 0.0 | 0.0039 | 0.0 | 0.0 | 0.9439 | 0.8466 | 0.9542 | 0.0 | 0.0 | 0.0005 | 0.0 | nan | 0.6394 | 0.8393 | 0.0 | 0.6850 | 0.2629 | nan | 0.2099 | 0.0 | 0.0 | 0.7123 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6215 | 0.0 | 0.1074 | 0.0001 | 0.0 | nan | 0.0 | 0.0039 | 0.0 | 0.0 | 0.7932 | 0.6883 | 0.8870 | 0.0 | 0.0 | 0.0005 | 0.0 | | 1.0493 | 0.65 | 260 | 0.7100 | 0.2093 | 0.2505 | 0.8086 | nan | 0.8021 | 0.9639 | 0.0 | 0.7634 | 0.3138 | nan | 0.2254 | 0.0 | 0.0 | 0.9232 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9235 | 0.0 | 0.0942 | 0.0011 | 0.0 | nan | 0.0 | 0.0041 | 0.0 | 0.0 | 0.9356 | 0.8677 | 0.9456 | 0.0 | 0.0 | 0.0004 | 0.0 | nan | 0.6457 | 0.8343 | 0.0 | 0.7131 | 0.2547 | nan | 0.1979 | 0.0 | 0.0 | 0.7253 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6121 | 0.0 | 0.0881 | 0.0011 | 0.0 | nan | 0.0 | 0.0041 | 0.0 | 0.0 | 0.8033 | 0.7173 | 0.8899 | 0.0 | 0.0 | 0.0004 | 0.0 | | 0.4048 | 0.7 | 280 | 0.7147 | 0.2112 | 0.2566 | 0.8087 | nan | 0.7952 | 0.9466 | 0.0 | 0.7771 
| 0.4525 | nan | 0.3231 | 0.0 | 0.0 | 0.9329 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9329 | 0.0 | 0.0638 | 0.0 | 0.0 | nan | 0.0 | 0.0058 | 0.0 | 0.0 | 0.9506 | 0.8326 | 0.9420 | 0.0 | 0.0 | 0.0002 | 0.0 | nan | 0.6491 | 0.8535 | 0.0 | 0.7154 | 0.2954 | nan | 0.2641 | 0.0 | 0.0 | 0.7080 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5986 | 0.0 | 0.0604 | 0.0 | 0.0 | nan | 0.0 | 0.0058 | 0.0 | 0.0 | 0.7988 | 0.7070 | 0.8915 | 0.0 | 0.0 | 0.0002 | 0.0 | | 0.5975 | 0.75 | 300 | 0.7049 | 0.2116 | 0.2572 | 0.8123 | nan | 0.8359 | 0.9476 | 0.0 | 0.7908 | 0.4403 | nan | 0.2551 | 0.0 | 0.0 | 0.9417 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9331 | 0.0 | 0.0776 | 0.0008 | 0.0 | nan | 0.0 | 0.0070 | 0.0 | 0.0 | 0.9270 | 0.8649 | 0.9522 | 0.0 | 0.0 | 0.0002 | 0.0 | nan | 0.6454 | 0.8593 | 0.0 | 0.7080 | 0.3010 | nan | 0.2257 | 0.0 | 0.0 | 0.6990 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6080 | 0.0 | 0.0732 | 0.0008 | 0.0 | nan | 0.0 | 0.0070 | 0.0 | 0.0 | 0.8192 | 0.7197 | 0.8919 | 0.0 | 0.0 | 0.0002 | 0.0 | | 0.655 | 0.8 | 320 | 0.6919 | 0.2109 | 0.2554 | 0.8130 | nan | 0.8424 | 0.9520 | 0.0 | 0.8082 | 0.3620 | nan | 0.2336 | 0.0 | 0.0 | 0.9297 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9246 | 0.0 | 0.0843 | 0.0011 | 0.0 | nan | 0.0 | 0.0133 | 0.0 | 0.0 | 0.9332 | 0.8843 | 0.9474 | 0.0 | 0.0 | 0.0005 | 0.0 | nan | 0.6512 | 0.8564 | 0.0 | 0.7108 | 0.2799 | nan | 0.2115 | 0.0 | 0.0 | 0.7167 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6164 | 0.0 | 0.0790 | 0.0011 | 0.0 | nan | 0.0 | 0.0132 | 0.0 | 0.0 | 0.8027 | 0.7060 | 0.8923 | 0.0 | 0.0 | 0.0005 | 0.0 | | 0.766 | 0.85 | 340 | 0.6983 | 0.2094 | 0.2539 | 0.8097 | nan | 0.8143 | 0.9616 | 0.0 | 0.8042 | 0.3275 | nan | 0.2248 | 0.0000 | 0.0 | 0.9255 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9310 | 0.0 | 0.1054 | 0.0003 | 0.0 | nan | 0.0 | 0.0160 | 0.0 | 0.0 | 0.8967 | 0.9101 | 0.9536 | 0.0 | 0.0 | 0.0007 | 0.0 | nan | 0.6514 | 0.8454 | 0.0 | 0.7168 | 0.2643 | nan | 0.2028 | 0.0000 | 0.0 | 0.7219 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6178 | 0.0 | 0.0986 | 0.0003 | 0.0 | nan | 0.0 | 0.0159 | 0.0 | 0.0 | 0.7980 | 0.6668 | 0.8919 | 0.0 | 0.0 | 0.0007 | 0.0 | | 0.4367 | 0.9 | 360 | 0.6955 | 0.2123 | 0.2566 | 0.8128 | nan | 0.8090 | 0.9580 | 0.0 | 0.8199 | 0.3753 | nan | 0.2754 | 0.0 | 0.0 | 0.9210 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9402 | 0.0 | 0.0790 | 0.0010 | 0.0 | nan | 0.0 | 0.0143 | 0.0 | 0.0 | 0.9208 | 0.8883 | 0.9510 | 0.0 | 0.0 | 0.0006 | 0.0 | nan | 0.6541 | 0.8514 | 0.0 | 0.7174 | 0.2862 | nan | 0.2401 | 0.0 | 0.0 | 0.7243 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6082 | 0.0 | 0.0741 | 0.0010 | 0.0 | nan | 0.0 | 0.0143 | 0.0 | 0.0 | 0.8106 | 0.7056 | 0.8928 | 0.0 | 0.0 | 0.0006 | 0.0 | | 0.4969 | 0.95 | 380 | 0.6997 | 0.2123 | 0.2559 | 0.8125 | nan | 0.7947 | 0.9629 | 0.0 | 0.8125 | 0.3625 | nan | 0.2558 | 0.0001 | 0.0 | 0.9276 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9267 | 0.0 | 0.1071 | 0.0005 | 0.0 | nan | 0.0 | 0.0181 | 0.0 | 0.0 | 0.9333 | 0.8808 | 0.9485 | 0.0 | 0.0 | 0.0008 | 0.0 | nan | 0.6540 | 0.8439 | 0.0 | 0.7167 | 0.2790 | nan | 0.2245 | 0.0001 | 0.0 | 0.7203 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6212 | 0.0 | 0.1010 | 0.0005 | 0.0 | nan | 0.0 | 0.0180 | 0.0 | 0.0 | 0.8060 | 0.7023 | 0.8932 | 0.0 | 0.0 | 0.0008 | 0.0 | | 0.7571 | 1.0 | 400 | 0.6854 | 0.2132 | 0.2587 | 0.8151 | nan | 0.8383 | 0.9497 | 0.0 | 0.8212 | 0.3818 | nan | 0.2786 | 0.0 | 0.0 | 0.9368 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9300 | 0.0 | 0.0951 | 0.0012 | 0.0 | nan | 0.0 | 0.0181 | 0.0 | 0.0 | 0.9377 | 
0.8734 | 0.9576 | 0.0 | 0.0 | 0.0002 | 0.0 | nan | 0.6565 | 0.8602 | 0.0 | 0.7150 | 0.2892 | nan | 0.2447 | 0.0 | 0.0 | 0.7028 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6164 | 0.0 | 0.0896 | 0.0012 | 0.0 | nan | 0.0 | 0.0180 | 0.0 | 0.0 | 0.8065 | 0.7196 | 0.8903 | 0.0 | 0.0 | 0.0002 | 0.0 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
EchineF/ppo-Pyramids
EchineF
2024-03-04T15:50:21Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-03-04T15:50:18Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: EchineF/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
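You can also fetch the exported policy programmatically. A minimal sketch using `huggingface_hub`; the `.onnx` filename here is an assumption, so check the repository's file list for the actual name:

```python
from huggingface_hub import hf_hub_download

# Download the exported ONNX policy from this repo.
# "Pyramids.onnx" is an assumed filename; verify it in the repo's "Files" tab.
onnx_path = hf_hub_download(
    repo_id="EchineF/ppo-Pyramids",
    filename="Pyramids.onnx",
)
print(onnx_path)  # local cache path of the downloaded policy
```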
Shashank91097/Idefic_medical_VQA_merged11
Shashank91097
2024-03-04T15:48:20Z
6
0
transformers
[ "transformers", "safetensors", "idefics", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
2024-03-04T15:24:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
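The card above is an auto-generated template, but the repo tags identify an IDEFICS checkpoint for image-text-to-text, so here is a hedged loading sketch under that assumption. The image URL and question are illustrative, and the 4-bit/bitsandbytes tags suggest the stored weights may already be quantized, which can change how the model should be loaded:

```python
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor

checkpoint = "Shashank91097/Idefic_medical_VQA_merged11"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.float16, device_map="auto"
)

# IDEFICS prompts interleave text and images (URLs or PIL images).
prompts = [
    [
        "User: What is shown in this image?",
        "https://llava-vl.github.io/static/images/view.jpg",
        "<end_of_utterance>",
        "\nAssistant:",
    ]
]
inputs = processor(prompts, return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```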
BetterThanNothing/PPO-LunarLander-v2
BetterThanNothing
2024-03-04T15:46:41Z
3
2
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-04T15:46:20Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 259.53 +/- 25.87
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so check the repository's file list for the actual name:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo.
# "ppo-LunarLander-v2.zip" is an assumed filename; verify it in the repo files.
checkpoint = load_from_hub(
    repo_id="BetterThanNothing/PPO-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
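Continuing from the sketch above, the reported mean reward can be sanity-checked with SB3's evaluator. This assumes `gymnasium` with the Box2D extra installed; note that newer gymnasium releases may register `LunarLander-v3` instead of `-v2`:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
# `model` comes from the loading sketch above.
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```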
ailoveydovey/pd
ailoveydovey
2024-03-04T15:45:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-03-04T15:36:40Z
--- license: creativeml-openrail-m ---
euclaise/crow-1b-attempt1
euclaise
2024-03-04T15:41:56Z
111
3
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "dataset:euclaise/SuperMC", "dataset:euclaise/prm800k_preferences", "arxiv:2306.17492", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T02:36:40Z
--- license: apache-2.0 datasets: - euclaise/SuperMC - euclaise/prm800k_preferences model-index: - name: crow-1b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 25.51 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 25.87 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 24.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 48.28 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 49.41 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.83 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b name: Open LLM Leaderboard ---

Experiments in large-scale small-scale preference learning.

**This one was a failure: it benchmarks horribly, despite responding okay to trivia questions in testing.**

falcon-rw-1b trained with PRO (preference ranking optimization, see https://arxiv.org/abs/2306.17492) on SuperMC and PRM800K (only stage 1) for 3 epochs, using my supertrainer2000 framework. This is an experimental model.

Hyperparameters:
- AdamW, weight decay of 0.01, otherwise default hyperparams
- Maximum LR of 1e-5
- Cosine schedule with a warmup of 5400 steps
- Batch size of 4 (2 real x 2 accumulated)
- Maximum of 5 epochs, early stopping (visual observation), stopped after 3
- Gradient clipping norm value of 1.0
- PRO beta of 4

Training prompt format:
```
### Query
[insert instruction here]

### Answer
[insert response here]
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_euclaise__crow-1b)

| Metric |Value|
|---------------------------------|----:|
|Avg. |29.12|
|AI2 Reasoning Challenge (25-Shot)|25.51|
|HellaSwag (10-Shot) |25.87|
|MMLU (5-Shot) |24.80|
|TruthfulQA (0-shot) |48.28|
|Winogrande (5-shot) |49.41|
|GSM8k (5-shot) | 0.83|
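A minimal generation sketch using the training prompt format above; this is the standard `transformers` API, with the question, the exact whitespace around the headers, and the decoding settings as illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "euclaise/crow-1b-attempt1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Follow the "### Query / ### Answer" format the model was trained on.
prompt = "### Query\nWhat is the capital of France?\n\n### Answer\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```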
gorabbani/SpaceInvadersNoFrameskip-v4
gorabbani
2024-03-04T15:38:13Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-04T15:37:47Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 701.00 +/- 328.14 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gorabbani -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gorabbani -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gorabbani ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
vikash06/mistral_v1
vikash06
2024-03-04T15:36:14Z
1,502
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-23T07:08:08Z
--- license: mit model-index: - name: mistral_v1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 47.01 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 67.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 48.68 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 37.53 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 64.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 9.48 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1 name: Open LLM Leaderboard ---

This model was trained on an experimental basis on a small dataset, to assess whether training longer on a smaller dataset still yields good performance.

# Model Details
vikash06/llama-2-7b-small-model --> a model fine-tuned from Llama 2

# Uses

# Creative Writing: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.

# Closed QA: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task, include both the text of the question as well as the reference text in the form.

# Open QA: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.

# Summarization: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer.
To create a question for this task, include both the text of the question as well as the reference text in the form.

# Information Extraction: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc.) should be included in the passages. To create a question for this task, include both the text of the question as well as the reference text in the form.

# Classification: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (i.e. there is no reference text). You can choose any categories for classification you like; the more diverse the better.

# Brainstorming: Think up lots of examples in response to a question asking to brainstorm ideas.

# Direct Use
The model is intended for direct use.

# How to Get Started with the Model

```python
import torch
import pandas as pd
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vikash06/llama-2-7b-small-model")
model = AutoModelForCausalLM.from_pretrained(
    "vikash06/llama-2-7b-small-model", torch_dtype=torch.float16, device_map="cuda:0"
)
print(model)

def generate_training_prompt(instruction, context):
    # Build the instruction/context prompt format used during fine-tuning.
    return f"""
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {instruction}

### Context:
{context.strip()}
""".strip()

data1 = {
    "instruction": "When was the first Reading railway station opened?",
    "context": "Reading railway station is a major transport hub in Reading, Berkshire, England. It is on the northern edge of the town centre, near the main retail and commercial areas and the River Thames, 36 miles (58 km) from London Paddington. The first Reading station was opened on 30 March 1840 as the temporary western terminus of the original line of the Great Western Railway (GWR). Reading is the ninth-busiest station in the UK outside London and the second busiest interchange station outside London with over 3.8 million passengers changing trains at the station annually.",
    "response": "The first Reading railway station was opened on the 30th of March, 1840.",
    "category": "closed_qa",
}

prompt = generate_training_prompt(data1["instruction"], data1["context"])
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda(0)
outputs = model.generate(input_ids=input_ids, max_new_tokens=128, do_sample=True, top_p=0.9, temperature=0.3)

# Decode, strip the prompt, and drop empty lines from the generated text.
resp = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):].split("\n")
resp = [x for x in resp if x != ""]
print(resp)
```

# Training Data
1000 samples were carefully selected from each category.
# Training Procedure
We used the following libraries to fine-tune llama2-7b:
- torch==2.1.0
- transformers==4.35.2
- peft @ git+https://github.com/huggingface/peft.git
- bitsandbytes==0.41.1
- trl @ git+https://github.com/lvwerra/trl.git@34e6948d459540a21f80c5be227fb4da039dd97a

We used a batch size of 2 for 50 epochs.

# Evaluation
We ran the HellaSwag task using EleutherAI's evaluation library: https://github.com/EleutherAI/lm-evaluation-harness

Below are the results:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63a7d07154f1d0225b0b9d1c/fAgFzMAVp9Nazjz0OP5Dh.png)

# Environmental Impact
- Carbon Emitted: 0.432 kg/kWh
- Offset: 0%
- Hardware: 3x A6000 48GB
- Hours: 28

# Technical Report
A detailed write-up is coming soon.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vikash06__mistral_v1)

| Metric |Value|
|---------------------------------|----:|
|Avg. |45.85|
|AI2 Reasoning Challenge (25-Shot)|47.01|
|HellaSwag (10-Shot) |67.58|
|MMLU (5-Shot) |48.68|
|TruthfulQA (0-shot) |37.53|
|Winogrande (5-shot) |64.80|
|GSM8k (5-shot) | 9.48|
Weni/ZeroShot-3.3.22-Mistral-7b-Multilanguage-3.2.0
Weni
2024-03-04T15:33:28Z
1
0
peft
[ "peft", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-03-04T14:36:07Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: ZeroShot-3.3.22-Mistral-7b-Multilanguage-3.2.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZeroShot-3.3.22-Mistral-7b-Multilanguage-3.2.0 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8991 | 0.13 | 100 | 0.8553 | | 0.814 | 0.27 | 200 | 0.7848 | | 0.7839 | 0.4 | 300 | 0.7637 | | 0.7772 | 0.53 | 400 | 0.7485 | | 0.7631 | 0.67 | 500 | 0.7371 | | 0.7436 | 0.8 | 600 | 0.7292 | | 0.7426 | 0.94 | 700 | 0.7268 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
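This repo stores a PEFT adapter for the base model named in the front matter. A minimal loading sketch using the standard `peft` API, untested against this exact checkpoint:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Weni/ZeroShot-3.3.22-Mistral-7b-Multilanguage-3.2.0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
```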
cloudyu/mistral_11B_instruct_v0.1
cloudyu
2024-03-04T15:32:31Z
50
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T15:03:44Z
---
license: cc-by-4.0
---

```
This is a 48-layer model based on the Mistral architecture, SFT-trained on vicgalle/alpaca-gpt4.
The template is "{instruction} {inputs} \n {output}"
```

Example output:

```
<s> write me a story aboyt yosemite. Here is a short story about the beauty and wonder of Yosemite National Park: As the sun rose over the mountains, its warm golden rays illuminated the once- shadowed walls of the grandeur that was Yosemite. The early morning hush was broken by the sound of birds singing and the gentle rustling of leaves as the wind blowing through the trees. In the heart of this natural wonder, there lived a young girl named Sarah. She had grown up in the village at the edge of the park, and her parents were some of the few who had made their homes there. They were farmers, but their real love was for the beauty of the land around them. One day, while exploring the woods, Sarah stumbled upon a hidden clearing. It was filled with wildflowers, and in the center stood an old oak tree, its branches stretching out to provide shelter to many bird species. As she sat down under the tree, taking in the beauty of her surroundings, she heard a soft, melodious call. It was a baby deer, and it had fallen into a narrow stream. Its mother was frantic with worry, calling out to her child from the shore. Sarah quickly ran to help, and with the help of a group of travelers passing through the park, they managed to save the baby. From then on, Sarah would often visit the clearing, and soon, the animals began to trust her. She would sit and listen to the birds chirp, watch the butterflies flutter by, and even play with the curious foxes. Years went by, and Sarah grew up to become a renowned conservationist, dedicated to preserving the beauty of nature. And so, thanks to her efforts, generations of people have been able to explore and appreciate the wonder of Yosemite. The sun set over the mountains, casting long shadows across the landscape. As Sarah walked back to her village, she knew that she would never forget the memories she had made within those mountains, and that the magic of Yosemite would always remain with her. And so, the cycle of life continued, with each day bringing new adventures and discoveries within the vast expanse of the beautiful and timeless Yosemite National Park.
```
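A minimal generation sketch following the template above; the instruction and sampling settings are illustrative assumptions, not from the model author:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/mistral_11B_instruct_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Template from the card: "{instruction} {inputs} \n {output}"
instruction = "write me a story about yosemite."
prompt = f"{instruction}  \n "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```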
allensun123/clip-roberta-finetuned
allensun123
2024-03-04T15:31:23Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-text-dual-encoder", "feature-extraction", "generated_from_trainer", "dataset:ydshieh/coco_dataset_script", "endpoints_compatible", "region:us" ]
feature-extraction
2024-03-04T12:23:50Z
--- tags: - generated_from_trainer datasets: - ydshieh/coco_dataset_script model-index: - name: clip-roberta-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clip-roberta-finetuned This model was trained from scratch on the ydshieh/coco_dataset_script 2017 dataset. It achieves the following results on the evaluation set: - Loss: 1.6970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.38.0 - Pytorch 2.1.2 - Datasets 2.14.7 - Tokenizers 0.15.2
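This checkpoint follows the dual-encoder (CLIP-style) setup used by the transformers contrastive image-text example, so here is a minimal similarity-scoring sketch under that assumption; the caption and COCO image URL are illustrative, and the repo is assumed to include saved processor files:

```python
import requests
from PIL import Image
from transformers import VisionTextDualEncoderModel, VisionTextDualEncoderProcessor

repo = "allensun123/clip-roberta-finetuned"
model = VisionTextDualEncoderModel.from_pretrained(repo)
processor = VisionTextDualEncoderProcessor.from_pretrained(repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # sample COCO image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["two cats sleeping on a couch"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
print(outputs.logits_per_image)  # image-text similarity logits
```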
cerc-aai/vicuna-7b-clip-finetune-lora
cerc-aai
2024-03-04T15:31:17Z
7
4
transformers
[ "transformers", "llava", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-11-20T19:35:05Z
# Robin-LLaVA

**A LLaVA-fork enabling the Mistral-7B & Open-Hermes-2.5 language models to process images**

This repo and the associated models were created in collaboration between the [Robin team at AGI-Collective](#team) and [Simon Ramstedt](https://simonramstedt.com/), with computing resources from [Hessian-AI](https://hessian.ai/) and [OLCF](https://www.olcf.ornl.gov/).

The Robin Effort: We study different components and methods to merge pretrained vision and language models with the goal of building better visual understanding models. As part of this first milestone we study the pretrained LLMs (Vicuna, Mistral and OpenHermes 2.5) and vision models (CLIP and SigLIP), further improving capabilities by finetuning the vision encoder.

## Installation

Ideally install into an empty venv (`python -m venv venv && source venv/bin/activate`)

```bash
pip install git+ssh://[email protected]/agi-collective/robin-llava.git
```

Note that currently Robin-LLaVA inference requires an Nvidia GPU with 24GB+, since CPU inference and quantized inference don't work reliably yet. Training was done on a machine with 8 x A100 80GB Nvidia GPUs provided by [Hessian-AI](https://hessian.ai/).

## Run interactive command line interface

```bash
python -m llava.serve.cli \
    --model-path agi-collective/mistral-7b-oh-siglip-so400m-finetune-lora \
    --model-base teknium/OpenHermes-2.5-Mistral-7B \
    --image-file https://llava-vl.github.io/static/images/view.jpg
```

## Use as library

```python
from llava.serve.pipeline import LlavaMistralPipeline

pipe = LlavaMistralPipeline(
    model_path="agi-collective/mistral-7b-oh-siglip-so400m-finetune-lora",
    model_base="teknium/OpenHermes-2.5-Mistral-7B",
)

messages = [
    {"role": "USER", "content": "What's in the image?", "image": "https://llava-vl.github.io/static/images/view.jpg"},
]
messages = pipe(messages)
# returns the original messages list plus the new response, i.e.:
# {"role": "ASSISTANT", "content": ...}
```

## Available models

We use the following components:
- **Base LLM**: We explore using Vicuna, Mistral and OpenHermes-2.5
- **Base Vision Model**: We use the SigLIP model since it gives stronger performance on vision benchmarks compared to CLIP
- We finetune the vision encoder, hoping the next-token prediction loss helps further improve the vision capabilities of the pretrained vision encoder

| Model | Base | GQA | SQA Text | SQA Image |
| ------------------------------------------------------------------ | --------------------------------- | ----- | -------- | --------- |
| liuhaotian/llava-v1.5-7b | lmsys/vicuna-7b-v1.5 | 62 | 70.43 | 66.8 |
| liuhaotian/llava-v1.5-13b | lmsys/vicuna-7b-v1.5 | 63.3 | | 71.6 |
| [agi-collective/vicuna-7b-clip-finetune-lora](https://huggingface.co/agi-collective/vicuna-7b-clip-finetune-lora) | lmsys/vicuna-7b-v1.5 | **62.04** | 70.86 | 68.72 |
| [agi-collective/vicuna-7b-siglip-so400m-finetune-lora](https://huggingface.co/agi-collective/vicuna-7b-siglip-so400m-finetune-lora) | lmsys/vicuna-7b-v1.5 | 56.79 | 68.76 | 67.48 |
| [agi-collective/mistral-7b-siglip-so400m-finetune-lora](https://huggingface.co/agi-collective/mistral-7b-siglip-so400m-finetune-lora) | mistralai/Mistral-7B-v0.1 | 49.44 | 73.66 | 68.57 |
| [agi-collective/mistral-7b-oh-siglip-so400m-frozen-ve-finetune-lora](https://huggingface.co/agi-collective/mistral-7b-oh-siglip-so400m-frozen-ve-finetune-lora) | teknium/OpenHermes-2.5-Mistral-7B | 53.59 | 78.17 | 72.73 |
| [agi-collective/mistral-7b-oh-siglip-so400m-finetune-lora](https://huggingface.co/agi-collective/mistral-7b-oh-siglip-so400m-finetune-lora) | teknium/OpenHermes-2.5-Mistral-7B | 54.48 | **79.56** | **74.22** |

(best 7B model results **highlighted**)

## Evaluations

Evaluations were done using a subset of the evaluation suite in https://github.com/haotian-liu/llava. More evaluations, including both vision-language and language-only benchmarks, are coming soon!

## Training

For training details see the `pretrain.sh` and `finetune_lora.sh` scripts inside `scripts/*/`. The training code is for information purposes only and we won't be able to help people debug problems with it for this preliminary release. More rigorous and less rushed training scripts will be released in the future.

## Examples

**User:** What's in this image? <img src="https://llava-vl.github.io/static/images/view.jpg" width=50%></img>

**Llava7b:** The image features a pier extending out over a large body of water, likely a lake. The pier is made of wood and has a bench at the end, providing a place for people to sit and enjoy the view. The water appears calm and serene, making it an ideal spot for relaxation.

**HermesSiglip (Ours):** The image features a wooden pier extending out into a large body of water. The pier is surrounded by a serene and calm lake, providing a peaceful atmosphere. The pier appears to be empty, with no people or boats visible in the scene. The water is calm and still, creating a tranquil environment for those who visit the pier.

---

**User:** Will these magnets attract or repel each other? <img src="https://scienceqa-webdemo.s3.us-west-1.amazonaws.com/extra/5727/image.png" width=50%></img>

**Llava7b:** The magnets in the image are of different colors and placed next to each other. It is not possible to determine their polarity from the image alone, so it is not possible to definitively say whether they will attract or repel each other. The behavior of magnets depends on their polarity and the orientation of their magnetic fields. To determine their interaction, one would need to observe the magnets in a more controlled environment, such as a magnetic field tester or a magnetic compass.

**HermesSiglip (Ours):** In the image, there are two magnets placed on a gray background, one with a green stripe and the other with a blue stripe. Since the magnets have opposite colors, they will attract each other due to their magnetic properties. The magnets are aligned in a way that their magnetic poles are facing each other, which causes them to be attracted to one another. This phenomenon is known as the magnetic attraction between two oppositely charged magnets.

## Limitations

For the Open-Hermes-2.5-based model we switched away from the ChatML chat template and back to the Llava default Vicuna chat template. The benchmarks and example results above show that this worked fine, but we believe that we can achieve even better performance and usability by retraining the model with ChatML.

## <a name="team"></a>AGI-Collective - Robin Team

Alexis Roger, Andrew R Williams, Daniel Kaplan, Edwin Fennell, George Adamopoulos, Kshitij Gupta, Prateek Humane, Quentin Anthony, Rishika Bhagwatkar, Sun Qi, Yuchen Lu, Irina Rish

## Acknowledgements

We would like to thank Hessian-AI for providing us with free access to 8-16 A100 GPUs for a few weeks and to Florian and Patrick at Hessian AI for their support. We would also like to thank the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility.
Preliminary experiments were conducted under the INCITE compute grant on the Summit supercomputer, supported under Contract DE-AC05-00OR22725. This grant was awarded to the AAI CERC lab for their Scalable Foundation Models for Transferrable Generalist AI project. This work was in collaboration with representatives from EleutherAI.

The code in this repo is based on [github.com/haotian-liu/LLaVA](https://github.com/haotian-liu/LLaVA).

<a href="https://hessian.ai/"><img src="https://hessian.ai/wp-content/themes/hessianai/img/hessian-ai-logo.svg"></a> <span> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </span> <a href="https://www.olcf.ornl.gov/"><img style="width: 40%; height: auto;" src="https://www.olcf.ornl.gov/wp-content/uploads/2016/07/olcf_retina_logo.png"></a>
Bakanayatsu/Pony-Diffusion-V6-XL-for-Anime
Bakanayatsu
2024-03-04T15:27:30Z
1,093
9
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-03-04T10:26:19Z
Same as the original [Pony Diffusion for Anime](https://civitai.com/models/315596/pony-diffusion-for-anime), but converted to the diffusers format.
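A minimal loading sketch; the pipeline class comes straight from the repo tags, while the prompt and settings are illustrative (see the linked model page for recommended prompting):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Bakanayatsu/Pony-Diffusion-V6-XL-for-Anime",
    torch_dtype=torch.float16,
).to("cuda")

# Illustrative prompt; adjust to the model's recommended tag style.
image = pipe("anime style, a pony standing in a meadow", num_inference_steps=25).images[0]
image.save("pony.png")
```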
bnunticha/lst20-orchid-baseline-new
bnunticha
2024-03-04T15:27:21Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "camembert", "token-classification", "generated_from_trainer", "base_model:airesearch/wangchanberta-base-att-spm-uncased", "base_model:finetune:airesearch/wangchanberta-base-att-spm-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-04T13:42:30Z
--- base_model: airesearch/wangchanberta-base-att-spm-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: lst20-orchid-baseline-new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lst20-orchid-baseline-new This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1383 - Precision: 0.8460 - Recall: 0.6761 - F1: 0.7516 - Accuracy: 0.9460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1753 | 1.0 | 1425 | 0.1463 | 0.8332 | 0.6466 | 0.7281 | 0.9417 | | 0.1513 | 2.0 | 2850 | 0.1457 | 0.8829 | 0.6099 | 0.7214 | 0.9431 | | 0.1393 | 3.0 | 4275 | 0.1388 | 0.8607 | 0.6495 | 0.7403 | 0.9450 | | 0.129 | 4.0 | 5700 | 0.1394 | 0.8561 | 0.6596 | 0.7451 | 0.9455 | | 0.1266 | 5.0 | 7125 | 0.1383 | 0.8460 | 0.6761 | 0.7516 | 0.9460 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
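For inference, a minimal sketch via the `transformers` pipeline; the Thai example sentence is illustrative, and the returned tag names depend on the label map saved with this checkpoint:

```python
from transformers import pipeline

# Token-classification pipeline for this fine-tuned WangchanBERTa checkpoint.
clf = pipeline("token-classification", model="bnunticha/lst20-orchid-baseline-new")
print(clf("วันนี้อากาศดีมากเราจึงออกไปเดินเล่นที่สวน"))
```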
haturusinghe/sinhala_off_finetuned_completions_llama2_7b
haturusinghe
2024-03-04T15:25:37Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-04T14:26:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sid-th26/llama-upsc-new
sid-th26
2024-03-04T15:25:31Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T12:20:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
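The card above is an empty template, but the repo tags identify a Llama-architecture text-generation checkpoint, so here is a hedged loading sketch under that assumption; the prompt is illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sid-th26/llama-upsc-new"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Explain the role of the Finance Commission of India.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```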
core-3/kuno-royale-7B
core-3
2024-03-04T15:21:53Z
61
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "eren23/ogno-monarch-jaskier-merge-7b", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:eren23/ogno-monarch-jaskier-merge-7b", "base_model:merge:eren23/ogno-monarch-jaskier-merge-7b", "license:cc-by-nc-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T20:21:08Z
--- license: cc-by-nc-2.0 tags: - merge - mergekit - lazymergekit - SanjiWatsuki/Kunoichi-DPO-v2-7B - eren23/ogno-monarch-jaskier-merge-7b base_model: - SanjiWatsuki/Kunoichi-DPO-v2-7B - eren23/ogno-monarch-jaskier-merge-7b model-index: - name: kuno-royale-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.76 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.2 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.13 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.12 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.32 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B name: Open LLM Leaderboard --- # kuno-royale-7B [v2 is probably better](https://huggingface.co/core-3/kuno-royale-v2-7b) 🤷 |Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | |-------------------|---------|-----|-----------|------|------------|------------|-------| | eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO | 76.45 | 73.12 | 89.09 | 64.80 | 77.45 | 84.77 | 69.45 | | [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b) | 74.80 | 72.01 | 88.15 | 65.07 | 71.10 | 82.24 | 70.20 | | **core-3/kuno-royale-7B** | **74.74** | **71.76** | **88.20** | **65.13** | **71.12** | **82.32** | **69.90** | SanjiWatsuki/Kunoichi-DPO-v2-7B | 72.46 | 69.62 | 87.44 | 64.94 | 66.06 | 80.82 | 65.88 | | SanjiWatsuki/Kunoichi-7B | 72.13 | 68.69 | 87.10 | 64.90 | 64.04 | 81.06 | 67.02 | ## Original LazyMergekit Card: kuno-royale-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) * [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: 
SanjiWatsuki/Kunoichi-DPO-v2-7B layer_range: [0, 32] - model: eren23/ogno-monarch-jaskier-merge-7b layer_range: [0, 32] merge_method: slerp base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "core-3/kuno-royale-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
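The LazyMergekit notebook linked above drives [mergekit](https://github.com/arcee-ai/mergekit) under the hood. As a rough local equivalent — a minimal sketch assuming `pip install mergekit` provides the `mergekit-yaml` CLI, with an illustrative output path — you could write the config to disk and invoke the CLI directly:

```python
import subprocess
from pathlib import Path

# The slerp config from the card above, written verbatim to disk.
config = """
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
      - model: eren23/ogno-monarch-jaskier-merge-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
"""

Path("config.yaml").write_text(config)
# Merge into ./kuno-royale-7B (downloads both source models on first run).
subprocess.run(["mergekit-yaml", "config.yaml", "./kuno-royale-7B"], check=True)
```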
ailoveydovey/jgrnt
ailoveydovey
2024-03-04T15:20:05Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-03-04T15:10:04Z
--- license: creativeml-openrail-m ---
AnonymousPaper/ITS_Bert_french_difficulty
AnonymousPaper
2024-03-04T15:19:23Z
4
0
transformers
[ "transformers", "safetensors", "endpoints_compatible", "region:us" ]
null
2024-03-04T12:55:17Z
mlabonne/Monarch-7B
mlabonne
2024-03-04T15:18:10Z
354
11
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "base_model:mlabonne/NeuBeagle-7B", "base_model:merge:mlabonne/NeuBeagle-7B", "base_model:mlabonne/NeuralOmniBeagle-7B", "base_model:merge:mlabonne/NeuralOmniBeagle-7B", "base_model:mlabonne/OmniTruthyBeagle-7B-v0", "base_model:merge:mlabonne/OmniTruthyBeagle-7B-v0", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-13T11:14:30Z
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit base_model: - mlabonne/OmniTruthyBeagle-7B-v0 - mlabonne/NeuBeagle-7B - mlabonne/NeuralOmniBeagle-7B model-index: - name: Monarch-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.04 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.03 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.41 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 77.35 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.07 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B name: Open LLM Leaderboard --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/zDCZ6uIu68k1JeCOa9bHl.jpeg) # Monarch-7B **Update 13/02/24: Monarch-7B is the best-performing model on the YALL leaderboard.** Monarch-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0) * [mlabonne/NeuBeagle-7B](https://huggingface.co/mlabonne/NeuBeagle-7B) * [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B) ## 🏆 Evaluation The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on Nous suite. See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). 
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**Monarch-7B**](https://huggingface.co/mlabonne/Monarch-7B) [📄](https://gist.github.com/mlabonne/0b8d057c5ece41e0290580a108c7a093) | **62.68** | **45.48** | **77.07** | **78.04** | **50.14** | | [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 | | [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/14687f1eb3425b166db511f31f8e66f6) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 | | [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [📄](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 | | [eren23/dpo-binarized-NeuralTrix-7B](https://huggingface.co/eren23/dpo-binarized-NeuralTrix-7B) [📄](https://gist.github.com/CultriX-Github/dbdde67ead233df0c7c56f1b091f728c) | 62.5 | 44.57 | 76.34 | 79.81 | 49.27 | | [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo) [📄](https://gist.github.com/CultriX-Github/df0502599867d4043b45d9dafb5976e8) | 62.5 | 44.61 | 76.33 | 79.8 | 49.24 | ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: mlabonne/OmniTruthyBeagle-7B-v0 parameters: density: 0.65 weight: 0.36 - model: mlabonne/NeuBeagle-7B parameters: density: 0.6 weight: 0.34 - model: mlabonne/NeuralOmniBeagle-7B parameters: density: 0.6 weight: 0.3 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Monarch-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Monarch-7B) | Metric |Value| |---------------------------------|----:| |Avg. |76.25| |AI2 Reasoning Challenge (25-Shot)|73.04| |HellaSwag (10-Shot) |89.03| |MMLU (5-Shot) |64.41| |TruthfulQA (0-shot) |77.35| |Winogrande (5-shot) |84.61| |GSM8k (5-shot) |69.07|
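The `dare_ties` configuration above has two knobs per model: `density`, the fraction of each task vector (fine-tuned weights minus base weights) that survives random pruning, and `weight`, the coefficient applied when the surviving deltas are summed back onto the base model. A minimal torch sketch of the drop-and-rescale step, under the standard DARE formulation (names and shapes are illustrative):

```python
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each task-vector entry with probability `density`, then rescale
    the survivors by 1/density so the expected update is unchanged."""
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return delta * mask / density

# Toy example: one weight matrix, one donor model.
base = torch.randn(4, 4)
finetuned = base + 0.01 * torch.randn(4, 4)
delta = finetuned - base
merged = base + 0.36 * dare_sparsify(delta, density=0.65)  # OmniTruthyBeagle's knobs above
```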
mlabonne/NeuralBeagle14-7B
mlabonne
2024-03-04T15:18:05Z
161
158
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "dpo", "rlhf", "conversational", "base_model:mlabonne/Beagle14-7B", "base_model:finetune:mlabonne/Beagle14-7B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-15T19:09:40Z
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit - dpo - rlhf base_model: mlabonne/Beagle14-7B model-index: - name: NeuralBeagle14-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.34 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.55 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 69.93 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.28 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B name: Open LLM Leaderboard --- ![](https://i.imgur.com/89ZAKcn.png) # 🐶 NeuralBeagle14-7B **Update 01/16/24: NeuralBeagle14-7B is (probably) the best 7B model you can find! 🎉** NeuralBeagle14-7B is a DPO fine-tune of [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) using the [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference dataset and my DPO notebook from [this article](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac). It is based on a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1), based on jondurbin's [repo](https://github.com/jondurbin/bagel) and [jondurbin/bagel-v0.3](https://huggingface.co/datasets/jondurbin/bagel-v0.3) * [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp), based on [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) Thanks [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 
💪 You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralBeagle14-7B-GGUF-Chat) (GGUF Q4_K_M). ## 🔍 Applications This model uses a context window of 8k. It is compatible with different templates, like chatml and Llama's chat template. Compared to other 7B models, it displays good performance in instruction following and reasoning tasks. It can also be used for RP and storytelling. ## ⚡ Quantized models * **GGUF**: https://huggingface.co/mlabonne/NeuralBeagle14-7B-GGUF * **GPTQ**: https://huggingface.co/TheBloke/NeuralBeagle14-7B-GPTQ * **AWQ**: https://huggingface.co/TheBloke/NeuralBeagle14-7B-AWQ * **EXL2**: https://huggingface.co/LoneStriker/NeuralBeagle14-7B-8.0bpw-h8-exl2 ## 🏆 Evaluation ### Open LLM Leaderboard NeuralBeagle14-7B ranks first on the Open LLM Leaderboard in the ~7B category. ![](https://i.imgur.com/4nAzJsr.png) It has the same average score as Beagle14-7B ("Show merges"), which might be due to an unlucky run. I think I might be overexploiting argilla/distilabel-intel-orca-dpo-pairs at this point, since this dataset or its original version is present in multiple models. I need to find more high-quality preference data for the next DPO merge. Note that some models like udkai/Turdus and nfaheem/Marcoroni-7b-DPO-Merge are unfortunately contaminated on purpose (see the very high Winogrande score). ### Nous The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite. It is the best 7B model to date. | Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/NeuralBeagle14-7B**](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [📄](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | **60.25** | **46.06** | **76.77** | **70.32** | **47.86** | | [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) [📄](https://gist.github.com/mlabonne/f5a5bf8c0827bbec2f05b97cc62d642c) | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 | | [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 | | [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp) [📄](https://gist.github.com/mlabonne/9082c4e59f4d3f3543c5eda3f4807040) | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 | | [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B) [📄](https://gist.github.com/mlabonne/b31572a4711c945a4827e7242cfc4b9d) | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 | | [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) [📄](https://gist.github.com/mlabonne/1afab87b543b0717ec08722cf086dcc3) | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 | | [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 | You can find the complete benchmark on [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). 
## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/NeuralBeagle14-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` <p align="center"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p>
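As the Applications section notes, the model also accepts ChatML-style prompts. A minimal sketch of forcing ChatML explicitly — the Jinja string below is the common ChatML convention, assumed here rather than read from this repo's tokenizer config:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/NeuralBeagle14-7B")

# Generic ChatML template (an assumption, not taken from tokenizer_config.json).
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)

messages = [{"role": "user", "content": "Write a haiku about beagles."}]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```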
Gayyuuu/lion-cub-azg
Gayyuuu
2024-03-04T15:17:58Z
0
0
null
[ "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-03-04T15:15:47Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### Lion-Cub-Azg Dreambooth model trained by Gayyuuu following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: TCE-917723H010 Sample pictures of this concept: ![0](https://huggingface.co/Gayyuuu/lion-cub-azg/resolve/main/sample_images/Azg_04.png)
mlabonne/Beagle14-7B
mlabonne
2024-03-04T15:17:41Z
2,411
15
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "fblgit/UNA-TheBeagle-7b-v1", "argilla/distilabeled-Marcoro14-7B-slerp", "base_model:argilla/distilabeled-Marcoro14-7B-slerp", "base_model:merge:argilla/distilabeled-Marcoro14-7B-slerp", "base_model:fblgit/UNA-TheBeagle-7b-v1", "base_model:merge:fblgit/UNA-TheBeagle-7b-v1", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-15T08:14:35Z
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit - fblgit/UNA-TheBeagle-7b-v1 - argilla/distilabeled-Marcoro14-7B-slerp base_model: - fblgit/UNA-TheBeagle-7b-v1 - argilla/distilabeled-Marcoro14-7B-slerp model-index: - name: Beagle14-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Beagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Beagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.7 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Beagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 68.88 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Beagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Beagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.42 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Beagle14-7B name: Open LLM Leaderboard --- # Beagle14-7B **Update 01/16/24: Check the DPO fine-tuned version of this model, [NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) (probably the best 7B model you can find)! 🎉** Beagle14-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1) * [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp) ## 🏆 Evaluation The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on Nous suite. 
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |----------------------------------------------------------|------:|------:|---------:|-------:|------:| |[**Beagle14-7B**](https://huggingface.co/mlabonne/Beagle14-7B)| **44.38**| **76.53**| **69.44**| **47.25**| **59.4**| |[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)| 42.75| 72.99| 52.99| 40.94| 52.42| |[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)| 43.67| 73.24| 55.37| 41.76| 53.51| |[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)| 47.79| 74.69| 55.92| 44.84| 55.81| |[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67| |[CatMarcoro14-7B-slerp](https://huggingface.co/occultml/CatMarcoro14-7B-slerp)| 45.21| 75.91| 63.81| 47.31| 58.06| ## 🧩 Configuration ```yaml slices: - sources: - model: fblgit/UNA-TheBeagle-7b-v1 layer_range: [0, 32] - model: argilla/distilabeled-Marcoro14-7B-slerp layer_range: [0, 32] merge_method: slerp base_model: fblgit/UNA-TheBeagle-7b-v1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Beagle14-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Beagle14-7B) | Metric |Value| |---------------------------------|----:| |Avg. |74.76| |AI2 Reasoning Challenge (25-Shot)|72.95| |HellaSwag (10-Shot) |87.95| |MMLU (5-Shot) |64.70| |TruthfulQA (0-shot) |68.88| |Winogrande (5-shot) |82.64| |GSM8k (5-shot) |71.42|
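In the slerp config above, each `t` list supplies anchor values that are spread across the 32 transformer layers (t=0 keeps fblgit/UNA-TheBeagle-7b-v1, t=1 takes argilla/distilabeled-Marcoro14-7B-slerp). A small numpy sketch of that schedule, assuming linear interpolation between evenly spaced anchors as an approximation of mergekit's gradient handling:

```python
import numpy as np

def t_schedule(anchors, num_layers=32):
    """Spread anchor values across layer indices by linear interpolation
    (an approximation of how mergekit expands gradient value lists)."""
    xs = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), xs, anchors)

print(t_schedule([0, 0.5, 0.3, 0.7, 1]))  # self_attn: leans on the base early, the donor late
print(t_schedule([1, 0.5, 0.7, 0.3, 0]))  # mlp: the mirror image
```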
mlabonne/Darewin-7B
mlabonne
2024-03-04T15:17:37Z
52
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "base_model:Intel/neural-chat-7b-v3-3", "base_model:merge:Intel/neural-chat-7b-v3-3", "base_model:OpenPipe/mistral-ft-optimized-1227", "base_model:merge:OpenPipe/mistral-ft-optimized-1227", "base_model:fblgit/una-cybertron-7b-v2-bf16", "base_model:merge:fblgit/una-cybertron-7b-v2-bf16", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:openaccess-ai-collective/DPOpenHermes-7B-v2", "base_model:merge:openaccess-ai-collective/DPOpenHermes-7B-v2", "base_model:openchat/openchat-3.5-0106", "base_model:merge:openchat/openchat-3.5-0106", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T00:49:44Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit base_model: - Intel/neural-chat-7b-v3-3 - openaccess-ai-collective/DPOpenHermes-7B-v2 - fblgit/una-cybertron-7b-v2-bf16 - openchat/openchat-3.5-0106 - OpenPipe/mistral-ft-optimized-1227 - mlabonne/NeuralHermes-2.5-Mistral-7B model-index: - name: Darewin-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.6 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.21 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.38 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.04 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B name: Open LLM Leaderboard --- # Darewin-7B Darewin-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) * [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2) * [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) * [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) * [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: Intel/neural-chat-7b-v3-3 parameters: density: 0.6 weight: 0.2 - model: openaccess-ai-collective/DPOpenHermes-7B-v2 parameters: density: 0.6 weight: 0.1 - model: fblgit/una-cybertron-7b-v2-bf16 parameters: density: 0.6 weight: 0.2 - model: openchat/openchat-3.5-0106 parameters: density: 0.6 weight: 0.15 - model: 
OpenPipe/mistral-ft-optimized-1227 parameters: density: 0.6 weight: 0.25 - model: mlabonne/NeuralHermes-2.5-Mistral-7B parameters: density: 0.6 weight: 0.1 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Darewin-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Darewin-7B) | Metric |Value| |---------------------------------|----:| |Avg. |71.87| |AI2 Reasoning Challenge (25-Shot)|68.60| |HellaSwag (10-Shot) |86.22| |MMLU (5-Shot) |65.21| |TruthfulQA (0-shot) |60.38| |Winogrande (5-shot) |79.79| |GSM8k (5-shot) |71.04|
mlabonne/Darewin-7B-v2
mlabonne
2024-03-04T15:17:28Z
61
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "conversational", "base_model:Intel/neural-chat-7b-v3-3", "base_model:merge:Intel/neural-chat-7b-v3-3", "base_model:Open-Orca/Mistral-7B-OpenOrca", "base_model:merge:Open-Orca/Mistral-7B-OpenOrca", "base_model:OpenPipe/mistral-ft-optimized-1227", "base_model:merge:OpenPipe/mistral-ft-optimized-1227", "base_model:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser", "base_model:merge:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:openaccess-ai-collective/DPOpenHermes-7B-v2", "base_model:merge:openaccess-ai-collective/DPOpenHermes-7B-v2", "base_model:openchat/openchat-3.5-0106", "base_model:merge:openchat/openchat-3.5-0106", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T13:16:11Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit base_model: - OpenPipe/mistral-ft-optimized-1227 - Intel/neural-chat-7b-v3-3 - openchat/openchat-3.5-0106 - openaccess-ai-collective/DPOpenHermes-7B-v2 - mlabonne/NeuralHermes-2.5-Mistral-7B - cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser - Open-Orca/Mistral-7B-OpenOrca model-index: - name: Darewin-7B-v2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.63 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 78.28 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 53.01 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 50.99 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 73.95 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 19.18 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B-v2 name: Open LLM Leaderboard --- # Darewin-7B-v2 Darewin-7B-v2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) * [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) * [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) * [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) * [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser) * [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-Instruct-v0.2 # No parameters necessary for base model - model: OpenPipe/mistral-ft-optimized-1227 parameters: density: 0.6 weight: 0.25 - model: 
Intel/neural-chat-7b-v3-3 parameters: density: 0.55 weight: 0.2 - model: openchat/openchat-3.5-0106 parameters: density: 0.5 weight: 0.2 - model: openaccess-ai-collective/DPOpenHermes-7B-v2 parameters: density: 0.45 weight: 0.1 - model: mlabonne/NeuralHermes-2.5-Mistral-7B parameters: density: 0.4 weight: 0.1 - model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser parameters: density: 0.4 weight: 0.1 - model: Open-Orca/Mistral-7B-OpenOrca parameters: density: 0.3 weight: 0.05 merge_method: dare_ties base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Darewin-7B-v2" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Darewin-7B-v2) | Metric |Value| |---------------------------------|----:| |Avg. |56.34| |AI2 Reasoning Challenge (25-Shot)|62.63| |HellaSwag (10-Shot) |78.28| |MMLU (5-Shot) |53.01| |TruthfulQA (0-shot) |50.99| |Winogrande (5-shot) |73.95| |GSM8k (5-shot) |19.18|
mlabonne/NeuralMonarch-7B
mlabonne
2024-03-04T15:16:59Z
13,666
12
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "lazymergekit", "dpo", "rlhf", "conversational", "en", "base_model:mlabonne/Monarch-7B", "base_model:finetune:mlabonne/Monarch-7B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-14T04:38:45Z
--- language: - en license: cc-by-nc-4.0 tags: - merge - lazymergekit - dpo - rlhf datasets: - mlabonne/truthy-dpo-v0.1 - mlabonne/distilabel-intel-orca-dpo-pairs base_model: - mlabonne/Monarch-7B model-index: - name: NeuralMonarch-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.21 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.09 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.41 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 77.79 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 67.78 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B name: Open LLM Leaderboard --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/LxRUvkSATmy-UDKN54Q3H.jpeg) # 👑 NeuralMonarch-7B NeuralMonarch-7B is a DPO fine-tune of [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B/) using the [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) and [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference datasets. It is based on a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0) * [mlabonne/NeuBeagle-7B](https://huggingface.co/mlabonne/NeuBeagle-7B) * [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B) Special thanks to [Jon Durbin](https://huggingface.co/jondurbin), [Intel](https://huggingface.co/Intel), and [Argilla](https://huggingface.co/argilla) for the preference datasets. **Try the demo**: https://huggingface.co/spaces/mlabonne/NeuralMonarch-7B-GGUF-Chat ## 🔍 Applications This model uses a context window of 8k. 
I recommend using it with the Mistral Instruct chat template (works perfectly with LM Studio). Compared to other 7B models, it performs well in instruction following and reasoning tasks. For a chat/RP model with strong reasoning abilities, check out [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B). ## ⚡ Quantized models * **GGUF**: https://huggingface.co/mlabonne/NeuralMonarch-7B-GGUF ## 🏆 Evaluation ### Nous NeuralMonarch-7B is one of the best-performing 7B models on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). | Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**NeuralMonarch-7B**](https://huggingface.co/mlabonne/NeuralMonarch-7B) [📄](https://gist.github.com/mlabonne/64050c96c6aa261a8f5b403190c8dee4) | **62.73** | **45.31** | **76.99** | **78.35** | **50.28** | | [AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) [📄](https://gist.github.com/mlabonne/1d33c86824b3a11d2308e36db1ba41c1) | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 | | [Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B) [📄](https://gist.github.com/mlabonne/0b8d057c5ece41e0290580a108c7a093) | 62.68 | 45.48 | 77.07 | 78.04 | 50.14 | | [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 | | [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/14687f1eb3425b166db511f31f8e66f6) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 | | [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [📄](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 | | [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B) [📄](https://gist.github.com/mlabonne/0e49d591787185fa5ae92ca5d9d4a1fd) | 62.3 | 45.85 | 77.26 | 76.06 | 50.03 | | [eren23/dpo-binarized-NeuralTrix-7B](https://huggingface.co/eren23/dpo-binarized-NeuralTrix-7B) [📄](https://gist.github.com/CultriX-Github/dbdde67ead233df0c7c56f1b091f728c) | 62.5 | 44.57 | 76.34 | 79.81 | 49.27 | | [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo) [📄](https://gist.github.com/CultriX-Github/df0502599867d4043b45d9dafb5976e8) | 62.5 | 44.61 | 76.33 | 79.8 | 49.24 | ### EQ-bench NeuralMonarch-7B also outperforms 70B and 120B parameter models on [EQ-bench](https://eqbench.com/) by [Samuel J. Paech](https://twitter.com/sam_paech), who kindly ran the evaluations. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/dnCFxieqLiAC3Ll6CfdZW.png) ### Open LLM Leaderboard NeuralMonarch-7B is one of the best-performing 7B models on the Open LLM Leaderboard. 
### MT-Bench ``` ########## First turn ########## score model turn gpt-4 1 8.95625 OmniBeagle-7B 1 8.31250 AlphaMonarch-7B 1 8.23750 claude-v1 1 8.15000 NeuralMonarch-7B 1 8.09375 gpt-3.5-turbo 1 8.07500 claude-instant-v1 1 7.80000 ########## Second turn ########## score model turn gpt-4 2 9.025000 claude-instant-v1 2 8.012658 OmniBeagle-7B 2 7.837500 gpt-3.5-turbo 2 7.812500 claude-v1 2 7.650000 AlphaMonarch-7B 2 7.618750 NeuralMonarch-7B 2 7.375000 ########## Average ########## score model gpt-4 8.990625 OmniBeagle-7B 8.075000 gpt-3.5-turbo 7.943750 AlphaMonarch-7B 7.928125 claude-instant-v1 7.905660 claude-v1 7.900000 NeuralMonarch-7B 7.734375 NeuralBeagle14-7B 7.628125 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/NeuralMonarch-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
mlabonne/NeuralMarcoro14-7B
mlabonne
2024-03-04T15:16:01Z
2,499
39
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mlabonne/Marcoro14-7B-slerp", "dpo", "rlhf", "merge", "mergekit", "lazymergekit", "conversational", "dataset:mlabonne/chatml_dpo_pairs", "base_model:mlabonne/Marcoro14-7B-slerp", "base_model:finetune:mlabonne/Marcoro14-7B-slerp", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T16:28:52Z
--- license: cc-by-nc-4.0 tags: - mlabonne/Marcoro14-7B-slerp - dpo - rlhf - merge - mergekit - lazymergekit datasets: - mlabonne/chatml_dpo_pairs base_model: mlabonne/Marcoro14-7B-slerp model-index: - name: NeuralMarcoro14-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.42 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.59 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.84 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 65.64 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B name: Open LLM Leaderboard --- ![](https://i.imgur.com/CBen22L.jpg) # NeuralMarcoro14-7B This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset. It improves the performance of the model on Nous benchmark suite and the Open LLM Benchmark. It is currently the best-performing 7B LLM on the Open LLM Leaderboard (08/01/24). You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralMarcoro14-7B-GGUF-Chat) (GGUF Q4_K_M). 
## ⚡ Quantized models * **GGUF**: https://huggingface.co/mlabonne/NeuralMarcoro14-7B-GGUF ## 🏆 Evaluation ### Open LLM Leaderboard ![](https://i.imgur.com/Int9P07.png) ![](https://i.imgur.com/70NXUKD.png) ### Nous | Model |AGIEval|GPT4ALL|TruthfulQA|Bigbench|Average| |-------------------------|------:|------:|---------:|-------:|------:| |[NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)| 44.59| 76.17| 65.94| 46.9| 58.4| |[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67| |Change | -0.07| -0.07| +1.79| +1.26| +0.73| ## 🧩 Training hyperparameters **LoRA**: * r=16 * lora_alpha=16 * lora_dropout=0.05 * bias="none" * task_type="CAUSAL_LM" * target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] **Training arguments**: * per_device_train_batch_size=4 * gradient_accumulation_steps=4 * gradient_checkpointing=True * learning_rate=5e-5 * lr_scheduler_type="cosine" * max_steps=200 * optim="paged_adamw_32bit" * warmup_steps=100 **DPOTrainer**: * beta=0.1 * max_prompt_length=1024 * max_length=1536 ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/NeuralMarcoro14-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
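The hyperparameters listed above map almost one-to-one onto a peft `LoraConfig` plus a trl `DPOTrainer`. A minimal sketch, assuming the trl 0.7-era `DPOTrainer` signature (with `beta`, `max_prompt_length`, and `max_length` passed directly) and that the dataset exposes `prompt`/`chosen`/`rejected` columns; the output path is illustrative:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "mlabonne/Marcoro14-7B-slerp"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA settings from the card above.
peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'],
)

# Training arguments from the card above.
training_args = TrainingArguments(
    output_dir="./neuralmarcoro14-dpo",  # illustrative path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

train_dataset = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with peft_config set, trl derives the frozen reference model
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```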
piazzola/test1
piazzola
2024-03-04T15:15:25Z
4
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/t5-efficient-base", "base_model:finetune:google/t5-efficient-base", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-02T05:56:21Z
--- license: cc-by-nc-nd-4.0 base_model: google/t5-efficient-base tags: - generated_from_trainer model-index: - name: checkpoint results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # how to use the model ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch tokenizer = AutoTokenizer.from_pretrained("piazzola/test1") model = AutoModelForSeq2SeqLM.from_pretrained("piazzola/test1") sentence = "i left the keys in the car." with torch.no_grad(): inputs = tokenizer([sentence], return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=100) generated_text = tokenizer.decode(outputs[0]) print(generated_text) ``` # checkpoint This model is a fine-tuned version of [google/t5-efficient-base](https://huggingface.co/google/t5-efficient-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.3088 | 0.3 | 7458 | 0.2634 | | 0.2615 | 0.6 | 14916 | 0.2143 | | 0.2294 | 0.9 | 22374 | 0.1951 | | 0.2137 | 1.2 | 29832 | 0.1830 | | 0.1944 | 1.5 | 37290 | 0.1736 | | 0.1918 | 1.8 | 44748 | 0.1682 | | 0.18 | 2.1 | 52206 | 0.1659 | | 0.1801 | 2.4 | 59664 | 0.1623 | | 0.185 | 2.7 | 67122 | 0.1609 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
mlabonne/OmniBeagle-7B
mlabonne
2024-03-04T15:15:00Z
115
21
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "base_model:shadowml/BeagSake-7B", "base_model:merge:shadowml/BeagSake-7B", "base_model:shadowml/BeagleSempra-7B", "base_model:merge:shadowml/BeagleSempra-7B", "base_model:shadowml/WestBeagle-7B", "base_model:merge:shadowml/WestBeagle-7B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-31T16:34:35Z
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit base_model: - shadowml/BeagleSempra-7B - shadowml/BeagSake-7B - shadowml/WestBeagle-7B model-index: - name: OmniBeagle-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.61 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/OmniBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.93 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/OmniBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/OmniBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 74.45 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/OmniBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/OmniBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.05 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/OmniBeagle-7B name: Open LLM Leaderboard --- # OmniBeagle-7B OmniBeagle-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [shadowml/BeagleSempra-7B](https://huggingface.co/shadowml/BeagleSempra-7B) * [shadowml/BeagSake-7B](https://huggingface.co/shadowml/BeagSake-7B) * [shadowml/WestBeagle-7B](https://huggingface.co/shadowml/WestBeagle-7B) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: shadowml/BeagleSempra-7B parameters: density: 0.65 weight: 0.4 - model: shadowml/BeagSake-7B parameters: density: 0.6 weight: 0.35 - model: shadowml/WestBeagle-7B parameters: density: 0.6 weight: 0.35 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/OmniBeagle-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, 
device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__OmniBeagle-7B) | Metric |Value| |---------------------------------|----:| |Avg. |75.66| |AI2 Reasoning Challenge (25-Shot)|72.61| |HellaSwag (10-Shot) |88.93| |MMLU (5-Shot) |64.80| |TruthfulQA (0-shot) |74.45| |Winogrande (5-shot) |83.11| |GSM8k (5-shot) |70.05|
ImperialIndians23/RobertaBaseUnprocessed
ImperialIndians23
2024-03-04T15:13:52Z
4
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-01T14:07:49Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: RobertaBaseUnprocessed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RobertaBaseUnprocessed This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3215 - Accuracy: 0.9217 - F1: 0.5661 - Precision: 0.5978 - Recall: 0.5377 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.3156 | 1.0 | 1047 | 0.2877 | 0.9245 | 0.5123 | 0.664 | 0.4171 | | 0.0477 | 2.0 | 2094 | 0.3275 | 0.9269 | 0.5263 | 0.6855 | 0.4271 | | 0.0712 | 3.0 | 3141 | 0.3215 | 0.9217 | 0.5661 | 0.5978 | 0.5377 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
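## How to use

The card above doesn't include an inference example. Below is a minimal sketch using the 🤗 `pipeline` API; the label names it prints come from the model's config (they aren't documented in the card), and the example input is purely illustrative.

```python
# Minimal inference sketch for the fine-tuned classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ImperialIndians23/RobertaBaseUnprocessed",
)

# Returns a list like [{"label": ..., "score": ...}]; the label names are
# whatever the model config defines and are not documented in the card.
print(classifier("Example input sentence to classify."))
```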
RENREN6/llama2-fine-tuned-openassistant
RENREN6
2024-03-04T15:10:16Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T14:33:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Koshti10/llava-v1.6-34b-hf
Koshti10
2024-03-04T15:06:25Z
3
0
transformers
[ "transformers", "safetensors", "llava", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-03-02T12:10:54Z
--- inference: false license: apache-2.0 --- <br> <br> # LLaVA Model Card ## Model details **Model type:** LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture. Base LLM: [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) **Model date:** LLaVA-v1.6-34B was trained in December 2023. **Paper or resources for more information:** https://llava-vl.github.io/ ## License This model inherits the [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) license. **Where to send questions or comments about the model:** https://github.com/haotian-liu/LLaVA/issues ## Intended use **Primary intended uses:** The primary use of LLaVA is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 500K academic-task-oriented VQA data mixture. - 50K GPT-4V data mixture. - 40K ShareGPT data. ## Evaluation dataset A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
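## How to run (sketch)

The card doesn't include inference code. The rough sketch below assumes a transformers release with LLaVA-NeXT (v1.6) support and that this repo mirrors the official `llava-v1.6-34b-hf` layout; the ChatML-style prompt is an assumption based on the Nous-Hermes-2-Yi-34B base and may need adjusting for this checkpoint.

```python
# Hypothetical usage sketch; a 34B multimodal model needs substantial GPU memory.
import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "Koshti10/llava-v1.6-34b-hf"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# ChatML-style prompt assumed from the Nous-Hermes-2-Yi-34B base model.
prompt = "<|im_start|>user\n<image>\nWhat is shown in this image?<|im_end|><|im_start|>assistant\n"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```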
sophosympatheia/Aurora-Nights-70B-v1.0
sophosympatheia
2024-03-04T15:03:43Z
1,444
22
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2307.11760", "license:llama2", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-23T17:08:11Z
--- language: - en license: llama2 model-index: - name: Aurora-Nights-70B-v1.0 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 70.47 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 62.81 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 66.34 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Aurora-Nights-70B-v1.0 name: Open LLM Leaderboard --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/aGUU0O9.png" alt="AuroraNights" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This model is a blend of [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), [Xwin-LM/Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), and [dreamgen/opus-v0.5-70b](https://huggingface.co/dreamgen/opus-v0.5-70b). I then merged [nRuaif/fiction.live-Kimiko-V2-70B](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B) into the resultant blend. See the bottom of this card for the exact settings used. This model is good at both following instructions and producing creative, uncensored storytelling and roleplaying content. This model turned out quite uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. ### Sampler Tips * I recommend keeping your max context to around 6144 tokens, although you can push higher if you don't mind some decrease in coherence. * I recommend using Quadratic Sampling (i.e. smoothing factor) as it's good stuff. 
Experiment with values between 0.2 and 0.5. * I recommend using Min-P. This model seems to work well with Min-P values in the entire range from low settings like 0.05 to high settings like 0.9 when paired with smoothing factor. Experiment to find your best setting. * You can enable dynamic temperature if you want, but that adds yet another variable to consider and I find it's unnecessary when you're already using Min-P and smoothing factor. * You don't *need* to use a high repetition penalty with this model, but it tolerates high rep penalty, so experiment to find the right value for your preferences. Experiment with any and all of the settings below! I'm not a sampler wizard, and what suits my preferences may not suit yours. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.35, "rep_pen": 1.15, "rep_pen_range": 2800, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0, "presence_pen": 0, "do_sample": true, "early_stopping": false, "dynatemp": false, "min_temp": 0.8, "max_temp": 1.35, "dynatemp_exponent": 1, "smoothing_factor": 0.4, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 2, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "logit_bias": [], "n": 1, "rep_pen_size": 0, "genamt": 500, "max_length": 6144 } ``` ### Prompting Tips Try the following context template for use in SillyTavern. It might help, although it's a little heavy on tokens. If you save the text as a .json file, you can import it directly. ``` { "story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}\n{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}", "example_separator": "", "chat_start": "---\nTaking the above information into consideration, you must engage with {{user}} and others as {{char}} in the roleplay below this line. Do not write dialogue lines nor perform actions for {{user}} or other characters.\n---\nSTART OF ROLEPLAY:\n", "use_stop_strings": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Midnight Rose" } ``` Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better. See [this paper](https://arxiv.org/abs/2307.11760). I recommend the Vicuna instruction format for this blend, but others work too. Tulu's format is another good one to use. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. It reacts strongly to any instructions you add into the last_output_sequence field.
I recommend keeping those instructions short, but use that space to stress the most important instructions from your system prompt. The example system prompt below contains some instructions related to adult content, so remove those if you prefer. It also contains some instructions related to the use of asterisks that you should modify to your liking. If you plan to run this model at 4096 context, I recommend that you slim down the template system prompt below because it's heavy on tokens. Discard what you don't want or need. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": true, "system_prompt": "You are a creative and detail-oriented AI assistant engaged in an uncensored iterative roleplaying experience as {{char}} with me, the user, playing the roles of {{user}} and narrator. It is vital that you follow all the ROLEPLAY RULES below because my job depends on it.\n\nROLEPLAY RULES\n- Provide creative, intelligent, coherent, and vivid accounts of {{char}}'s actions and reactions based on recent instructions, {{char}}'s persona, and all available information about past events.\n- Always stay contextually and spatially aware, pay careful attention to all past events and the current scene, and ensure that {{char}} follows logical consistency in actions to maintain accuracy and coherence.\n- Explicit adult content and profanity are permitted.\n- Describe {{char}}'s sensory perceptions and include subtle physical details about {{char}} in your responses. Vary these details to keep the roleplay fresh and engaging.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts, enclose those thoughts in *asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your contributions as {{char}}, staying focused on action and dialogue over exposition and narrative.\n- Only the user may advance time in the roleplay. Keep the progression grounded in the present context.", "system_sequence": "", "stop_sequence": "", "input_sequence": "USER:\n", "output_sequence": "ASSISTANT:\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "ASSISTANT(roleplay exclusively as {{char}} ensuring logical consistency with spatial awareness and past events to maintain accuracy and coherence):\n", "activation_regex": "", "name": "Midnight Rose Roleplay" } ``` ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). Tulu also has its own license, available at https://allenai.org/impact-license. I am not a lawyer and I do not profess to know how multiple licenses intersect in a merge of LLM model weights. You should consult with a lawyer before using any model merge beyond private use.
### Tools Used * [mergekit](https://github.com/cg123/mergekit) ``` models: - model: NousResearch_Llama-2-70b-hf # no parameters necessary for base model - model: allenai_tulu-2-dpo-70b # primary parameters: density: 1.0 weight: 0.4 - model: Xwin-LM_Xwin-LM-70B-V0.1 # secondary parameters: density: 0.7 weight: 0.3 - model: dreamgen_opus-v0.5-70b # supporting, good at storytelling and roleplay parameters: density: 0.2 weight: 0.6 merge_method: dare_ties base_model: NousResearch_Llama-2-70b-hf parameters: normalize: true int8_mask: true dtype: float32 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sophosympatheia__Aurora-Nights-70B-v1.0) | Metric |Value| |---------------------------------|----:| |Avg. |73.77| |AI2 Reasoning Challenge (25-Shot)|71.33| |HellaSwag (10-Shot) |88.33| |MMLU (5-Shot) |70.47| |TruthfulQA (0-shot) |62.81| |Winogrande (5-shot) |83.35| |GSM8k (5-shot) |66.34|
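### Running outside SillyTavern (sketch)

The sampler advice above is framed for SillyTavern, but the core values map onto plain 🤗 transformers. The sketch below is only an approximation: `min_p` requires a recent transformers release (drop that argument otherwise), and smoothing factor / quadratic sampling has no built-in transformers equivalent.

```python
# Approximate translation of the recommended sampler settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sophosympatheia/Aurora-Nights-70B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Vicuna-style prompt, as recommended in the card.
prompt = "USER:\nWrite the opening scene of a mystery story.\nASSISTANT:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,          # "temp": 1
    min_p=0.35,               # "min_p": 0.35 (recent transformers only)
    repetition_penalty=1.15,  # "rep_pen": 1.15
    max_new_tokens=500,       # "genamt": 500
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```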
ciebson/calculator_model_test
ciebson
2024-03-04T14:59:22Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-04T13:38:22Z
--- tags: - generated_from_trainer model-index: - name: calculator_model_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # calculator_model_test This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4927 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 512 - eval_batch_size: 512 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6077 | 1.0 | 41 | 2.0310 | | 1.6982 | 2.0 | 82 | 1.5525 | | 1.5473 | 3.0 | 123 | 1.5432 | | 1.5418 | 4.0 | 164 | 1.5405 | | 1.5401 | 5.0 | 205 | 1.5405 | | 1.5387 | 6.0 | 246 | 1.5401 | | 1.5366 | 7.0 | 287 | 1.5359 | | 1.5292 | 8.0 | 328 | 1.5337 | | 1.5289 | 9.0 | 369 | 1.5313 | | 1.5248 | 10.0 | 410 | 1.5324 | | 1.5174 | 11.0 | 451 | 1.5129 | | 1.4809 | 12.0 | 492 | 1.4740 | | 1.4456 | 13.0 | 533 | 1.4299 | | 1.3497 | 14.0 | 574 | 1.2158 | | 1.1149 | 15.0 | 615 | 0.9632 | | 0.8827 | 16.0 | 656 | 0.7687 | | 0.7432 | 17.0 | 697 | 0.6696 | | 0.6651 | 18.0 | 738 | 0.6081 | | 0.614 | 19.0 | 779 | 0.5694 | | 0.5734 | 20.0 | 820 | 0.5423 | | 0.5422 | 21.0 | 861 | 0.5248 | | 0.5221 | 22.0 | 902 | 0.5087 | | 0.5089 | 23.0 | 943 | 0.5026 | | 0.4984 | 24.0 | 984 | 0.4946 | | 0.4857 | 25.0 | 1025 | 0.4880 | | 0.4778 | 26.0 | 1066 | 0.4881 | | 0.4739 | 27.0 | 1107 | 0.4882 | | 0.4698 | 28.0 | 1148 | 0.4853 | | 0.4628 | 29.0 | 1189 | 0.4862 | | 0.4581 | 30.0 | 1230 | 0.4849 | | 0.4537 | 31.0 | 1271 | 0.4839 | | 0.4504 | 32.0 | 1312 | 0.4871 | | 0.4467 | 33.0 | 1353 | 0.4867 | | 0.4438 | 34.0 | 1394 | 0.4886 | | 0.4395 | 35.0 | 1435 | 0.4899 | | 0.4362 | 36.0 | 1476 | 0.4903 | | 0.4338 | 37.0 | 1517 | 0.4910 | | 0.4304 | 38.0 | 1558 | 0.4913 | | 0.4274 | 39.0 | 1599 | 0.4928 | | 0.4252 | 40.0 | 1640 | 0.4927 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
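## Usage sketch

No inference example is included above. A minimal sketch with the `text2text-generation` pipeline follows; the expected input format (how expressions were serialized during training) isn't documented, so the example input is a guess.

```python
# Minimal sketch; "12+46" is only an illustrative guess at the input format.
from transformers import pipeline

calculator = pipeline(
    "text2text-generation",
    model="ciebson/calculator_model_test",
)
print(calculator("12+46"))
```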
lbruderer/text-to-sql
lbruderer
2024-03-04T14:58:55Z
6
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-04T14:56:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
quiorte/codebert-Java-8m
quiorte
2024-03-04T14:57:55Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:neulab/codebert-java", "base_model:adapter:neulab/codebert-java", "region:us" ]
null
2024-03-04T14:56:00Z
--- library_name: peft base_model: neulab/codebert-java --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
Azzedde/phi2-lora-chess
Azzedde
2024-03-04T14:57:49Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-04T14:57:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
argilla/distilabeled-Marcoro14-7B-slerp
argilla
2024-03-04T14:57:35Z
27
10
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "distilabel", "dpo", "rlaif", "rlhf", "merge", "mergekit", "en", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "base_model:mlabonne/Marcoro14-7B-slerp", "base_model:finetune:mlabonne/Marcoro14-7B-slerp", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T08:59:34Z
--- language: - en license: cc-by-nc-4.0 tags: - distilabel - dpo - rlaif - rlhf - merge - mergekit datasets: - argilla/distilabel-intel-orca-dpo-pairs base_model: mlabonne/Marcoro14-7B-slerp model-index: - name: distilabeled-Marcoro14-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.73 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.47 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 65.1 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp name: Open LLM Leaderboard --- # ⚗️ distilabeled Marcoro14 7B Slerp <p align="center"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> ## Introduction This model is a new DPO fine-tune of the [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) model on our new open dataset [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs). You can find more information about the "distilabeled" dataset in the [argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B/blob/main/README.md#introduction) model card, and visit the [distilabel](https://github.com/argilla-io/distilabel) repo. ## Training details As we did with [Notus](https://argilla.io/blog/notus7b/), we wanted a reproducible recipe to test the impact of data quality. We're lucky to have so many amazing folks in the open community contributing reproducible, easy-to-use training scripts and recipes.
This time, [Maxime Labonne](https://twitter.com/maximelabonne) shared a [Colab](https://colab.research.google.com/drive/15iFBr1xWgztXvhrj5I9fBv20c7CFOPBE?usp=sharing) to fine-tune OpenHermes with DPO on Intel's original dataset, which was perfect for our purposes. We just updated the base model to [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp), and applied the same dataset recipe we used for [argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B/blob/main/README.md#introduction): ```python from datasets import load_dataset # Instead of this: # dataset = load_dataset("Intel/orca_dpo_pairs", split="train") # we did this dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train") dataset = dataset.filter( lambda r: r["status"] != "tie" and r["chosen_score"] >= 8 and not r["in_gsm8k_train"] ) ``` ## Benchmark results For benchmarking, we used the well-known "Nous" (or "Teknium") benchmark. Below you can find an overview, including our first experiment with a less ambitious dataset filtering (removing ties and `score>5`). To run the benchmark, we used another awesome contribution from Maxime: [LLM AutoEval](https://github.com/mlabonne/llm-autoeval), check it out! | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |-------------------------|------:|------:|---------:|-------:|------:| |[argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)| **45.4**| **76.47**| **65.46**| **47.19**| **58.63**| |[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67| |[argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B) | 44.64 | 73.35 | 55.96 | 42.21 | 54.04 | ### Training Hardware We used 1 x A100 80GB on RunPod for less than 1 hour. ## Acknowledgements We'd like to thank the amazing open community and in particular: * The Intel team for publishing a great open dataset and showing how well it worked in the first place * Teknium and NousResearch for their awesome work and models. * Maxime for sharing such great resources. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__distilabeled-Marcoro14-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |73.63| |AI2 Reasoning Challenge (25-Shot)|70.73| |HellaSwag (10-Shot) |87.47| |MMLU (5-Shot) |65.22| |TruthfulQA (0-shot) |65.10| |Winogrande (5-shot) |82.08| |GSM8k (5-shot) |71.19|
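## Reproducing the DPO step (sketch)

To make the recipe above more concrete, here is a minimal, illustrative TRL `DPOTrainer` setup built around the same dataset filter. It assumes a TRL release from this period (~0.7/0.8); the hyperparameters are placeholders rather than the values from Maxime's Colab, and the raw dataset columns may need renaming to the `prompt`/`chosen`/`rejected` format `DPOTrainer` expects.

```python
# Illustrative DPO fine-tuning sketch with TRL; not the exact Colab recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "mlabonne/Marcoro14-7B-slerp"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")
dataset = dataset.filter(
    lambda r: r["status"] != "tie" and r["chosen_score"] >= 8 and not r["in_gsm8k_train"]
)
# NOTE: DPOTrainer expects "prompt", "chosen" and "rejected" columns, so a
# mapping step over the raw columns is likely needed here.

training_args = TrainingArguments(
    output_dir="./distilabeled-marcoro14-dpo",
    per_device_train_batch_size=2,   # placeholder value
    learning_rate=5e-6,              # placeholder value
    num_train_epochs=1,
    remove_unused_columns=False,     # recommended for DPOTrainer
)

trainer = DPOTrainer(
    model,
    ref_model=None,    # TRL builds a frozen reference copy when None
    args=training_args,
    beta=0.1,          # standard DPO temperature
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```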
argilla/DistilabelBeagle14-7B
argilla
2024-03-04T14:57:27Z
11
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "dpo", "rlhf", "rlaif", "distilabel", "conversational", "arxiv:1910.09700", "base_model:mlabonne/Beagle14-7B", "base_model:finetune:mlabonne/Beagle14-7B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T13:03:59Z
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit - dpo - rlhf - rlaif - distilabel base_model: mlabonne/Beagle14-7B model-index: - name: DistilabelBeagle14-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.08 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/DistilabelBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.0 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/DistilabelBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 61.27 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/DistilabelBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 68.91 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/DistilabelBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/DistilabelBeagle14-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 36.09 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/DistilabelBeagle14-7B name: Open LLM Leaderboard --- # Model Card for Model ID This is a preference tuned version of `mlabonne/Beagle14-7B` using a mix of Argilla's orca pairs and a new upcoming multi-turn dpo dataset. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Argilla - **Model type:** [More Information Needed] - **Language(s) (NLP):** English - **License:** cc-by-nc-4.0 - **Finetuned from model [optional]:** mlabonne/Beagle14-7B ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__DistilabelBeagle14-7B) | Metric |Value| |---------------------------------|----:| |Avg. |67.52| |AI2 Reasoning Challenge (25-Shot)|71.08| |HellaSwag (10-Shot) |87.00| |MMLU (5-Shot) |61.27| |TruthfulQA (0-shot) |68.91| |Winogrande (5-shot) |80.74| |GSM8k (5-shot) |36.09|
argilla/notux-8x7b-v1
argilla
2024-03-04T14:57:05Z
58
165
transformers
[ "transformers", "tensorboard", "safetensors", "mixtral", "text-generation", "dpo", "rlaif", "preference", "ultrafeedback", "moe", "conversational", "en", "de", "es", "fr", "it", "dataset:argilla/ultrafeedback-binarized-preferences-cleaned", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-12T16:11:49Z
--- language: - en - de - es - fr - it license: apache-2.0 library_name: transformers tags: - dpo - rlaif - preference - ultrafeedback - moe datasets: - argilla/ultrafeedback-binarized-preferences-cleaned base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 pipeline_tag: text-generation model-index: - name: notux-8x7b-v1 results: [] --- <div align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/dj-spsk9eXMMXVGxK6jRz.png" alt="A banner representing Notus, the wind god of the south, in a mythical and artistic style. The banner features a strong, swirling breeze, embodying the warm, wet character of the southern wind. Gracefully flowing across the scene are several paper planes, caught in the gentle yet powerful gusts of Notus. The background is a blend of warm colors, symbolizing the heat of the south, with hints of blue and green to represent the moisture carried by this wind. The overall atmosphere is one of dynamic movement and warmth."/> </div> # Model Card for Notux 8x7B-v1 This model is a preference-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned) dataset using DPO (Direct Preference Optimization). As of Dec 26th 2023, it outperforms `Mixtral-8x7B-Instruct-v0.1` and is the top-ranked MoE (Mixture of Experts) model on the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This is part of the Notus family of models and experiments, where the Argilla team investigates data-first and preference tuning methods like dDPO (distilled DPO). This model is the result of our first experiment in tuning a MoE model that has already been fine-tuned with DPO (i.e., Mixtral-8x7B-Instruct-v0.1). ## Model Details ### Model Description - **Developed by:** Argilla (based on MistralAI's previous efforts) - **Shared by:** Argilla - **Model type:** Pretrained generative Sparse Mixture of Experts - **Language(s) (NLP):** English, Spanish, Italian, German, and French - **License:** MIT - **Finetuned from model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) ### Model Sources - **Repository:** https://github.com/argilla-io/notus - **Paper:** N/A ## Training Details ### Training Hardware We used a VM with 8 x H100 80GB hosted on runpod.io for 1 epoch (~10hr). ### Training Data We used a new iteration of the Argilla UltraFeedback preferences dataset named [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned).
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.4384 | 0.22 | 200 | 0.4556 | -0.3275 | -1.9448 | 0.7937 | 1.6174 | -405.7994 | -397.8617 | -1.3157 | -1.4511 | | 0.4064 | 0.43 | 400 | 0.4286 | -0.2163 | -2.2090 | 0.8254 | 1.9927 | -408.4409 | -396.7496 | -0.7660 | -0.6539 | | 0.3952 | 0.65 | 600 | 0.4275 | -0.1311 | -2.1603 | 0.8016 | 2.0291 | -407.9537 | -395.8982 | -0.6783 | -0.7206 | | 0.3909 | 0.87 | 800 | 0.4167 | -0.2273 | -2.3146 | 0.8135 | 2.0872 | -409.4968 | -396.8602 | -0.8458 | -0.7738 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.15.0 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__notus-8x7b-experiment) | Metric |Value| |---------------------------------|----:| |Avg. |73.18| |AI2 Reasoning Challenge (25-Shot)|70.99| |HellaSwag (10-Shot) |87.73| |MMLU (5-Shot) |71.33| |TruthfulQA (0-shot) |65.79| |Winogrande (5-shot) |81.61| |GSM8k (5-shot) |61.64|
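The original card ends without a usage snippet; the following is a minimal inference sketch (not from the card), assuming the standard `transformers` chat pipeline and enough GPU memory to hold an 8x7B MoE model in bfloat16:

```python
# Minimal inference sketch for argilla/notux-8x7b-v1 (illustrative, not from the card).
# Assumes sufficient GPU memory for an 8x7B MoE loaded in bfloat16.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="argilla/notux-8x7b-v1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "What is preference tuning?"}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```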
argilla/CapybaraHermes-2.5-Mistral-7B
argilla
2024-03-04T14:56:58Z
42
68
trl
[ "trl", "safetensors", "mistral", "distilabel", "dpo", "rlaif", "rlhf", "en", "dataset:argilla/dpo-mix-7k", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "model-index", "region:us" ]
null
2024-01-30T19:27:04Z
--- language: - en license: apache-2.0 library_name: trl tags: - distilabel - dpo - rlaif - rlhf datasets: - argilla/dpo-mix-7k base_model: teknium/OpenHermes-2.5-Mistral-7B model-index: - name: CapybaraHermes-2.5-Mistral-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/CapybaraHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.45 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/CapybaraHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.13 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/CapybaraHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.91 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/CapybaraHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.3 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/CapybaraHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 59.29 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/CapybaraHermes-2.5-Mistral-7B name: Open LLM Leaderboard --- # CapybaraHermes-2.5-Mistral-7B <div> <img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/Vmr0FtTvnny6Snm-UDM_n.png"> </div> <p align="center"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> This model is the launching partner of the [capybara-dpo dataset](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-9k-binarized) built with ⚗️ distilabel. It's a preference-tuned [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B). CapybaraHermes has been preference-tuned with LoRA and TRL for 3 epochs using Argilla's [dpo mix 7k](https://huggingface.co/datasets/argilla/dpo-mix-7k). To test the impact on multi-turn performance, we used MTBench. We also include the Nous Benchmark results and Mistral-7B-Instruct-v0.2 for reference, as it's a strong 7B model on MTBench: | Model | AGIEval | GPT4All | TruthfulQA | Bigbench | MTBench First Turn | MTBench Second Turn | Nous avg. | MTBench avg.
| |-----------------------------------|---------|---------|------------|----------|------------|-------------|-----------|--------------| | argilla/CapybaraHermes-2.5-Mistral-7B | **43.8** | **73.35** | 57.07 | **42.44** | 8.24375 | **7.5625** | 54.16 | **7.903125** | | teknium/OpenHermes-2.5-Mistral-7B | 42.75 | 72.99 | 52.99 | 40.94 | **8.25** | 7.2875 | 52.42 | 7.76875 | | Mistral-7B-Instruct-v0.2 | 38.5 | 71.64 | **66.82** | 42.29 | 7.8375 | 7.1 | **54.81** | 7.46875 | The most interesting aspect in the context of the capybara-dpo dataset is the increased performance in MTBench Second Turn scores. For the merge lovers, we also preference-tuned Beagle14-7B with a mix of capybara-dpo and distilabel orca pairs using the same recipe as NeuralBeagle (see [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) for reference): | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |------------------------------------------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[DistilabelBeagle14-7B](https://huggingface.co/dvilasuero/DistilabelBeagle14-7B)| 45.29| 76.92| 71.66| 48.78| 60.66| ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Argilla - **Shared by [optional]:** Argilla - **Model type:** 7B chat model - **Language(s) (NLP):** English - **License:** Same as OpenHermes - **Finetuned from model [optional]:** [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__CapybaraHermes-2.5-Mistral-7B) | Metric |Value| |---------------------------------|----:| |Avg. |68.14| |AI2 Reasoning Challenge (25-Shot)|65.78| |HellaSwag (10-Shot) |85.45| |MMLU (5-Shot) |63.13| |TruthfulQA (0-shot) |56.91| |Winogrande (5-shot) |78.30| |GSM8k (5-shot) |59.29|
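For readers who want to see what the preference tuning described above looks like in code, here is a hedged sketch using TRL's `DPOTrainer`. It is not the authors' script: argument names vary across TRL versions, the LoRA adapter setup is omitted, and the dataset is assumed to have been flattened into plain `prompt`/`chosen`/`rejected` text columns beforehand:

```python
# Hedged sketch of DPO preference tuning with TRL, as described in the card.
# Not the authors' actual script; exact arguments vary across TRL versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumes the preference pairs were preprocessed into plain-text
# "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("argilla/dpo-mix-7k", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a LoRA adapter, the frozen base can act as the reference
    args=TrainingArguments(output_dir="capybarahermes-dpo", num_train_epochs=3),
    beta=0.1,        # assumed DPO temperature; the card does not state it
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```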
Aryanne/sheared-plus-westlake-normal
Aryanne
2024-03-04T14:45:14Z
1,392
2
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "merge", "mergekit", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T20:09:57Z
--- license: apache-2.0 tags: - merge - mergekit model-index: - name: sheared-plus-westlake-normal results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 39.76 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 70.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.81 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 46.5 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 63.54 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal name: Open LLM Leaderboard --- Another trial of merging models of different sizes. It is still under testing; it should be more stable, but I have no idea whether it improves or degrades the base model. Recipe: ``` merge_method: task_anysize base_model: princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT models: - model: senseable/WestLake-7B-v2 parameters: weight: 1.0 dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__sheared-plus-westlake-normal) | Metric |Value| |---------------------------------|----:| |Avg. |41.16| |AI2 Reasoning Challenge (25-Shot)|39.76| |HellaSwag (10-Shot) |70.33| |MMLU (5-Shot) |26.81| |TruthfulQA (0-shot) |46.50| |Winogrande (5-shot) |63.54| |GSM8k (5-shot) | 0.00|
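The card specifies no prompt format, so a quick smoke test with plain generation is the safest first step; this sketch is illustrative only:

```python
# Quick smoke test (illustrative; the card documents no prompt format).
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Aryanne/sheared-plus-westlake-normal"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```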
adambiel22/calculator_model_test
adambiel22
2024-03-04T14:43:20Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-04T14:17:03Z
--- tags: - generated_from_trainer model-index: - name: calculator_model_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # calculator_model_test This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 512 - eval_batch_size: 512 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4753 | 1.0 | 13 | 1.7336 | | 1.3843 | 2.0 | 26 | 0.9380 | | 0.8281 | 3.0 | 39 | 0.7141 | | 0.6509 | 4.0 | 52 | 0.5507 | | 0.5643 | 5.0 | 65 | 0.5065 | | 0.5155 | 6.0 | 78 | 0.4642 | | 0.445 | 7.0 | 91 | 0.3848 | | 0.3912 | 8.0 | 104 | 0.3584 | | 0.3564 | 9.0 | 117 | 0.3142 | | 0.3175 | 10.0 | 130 | 0.2663 | | 0.2765 | 11.0 | 143 | 0.2291 | | 0.2436 | 12.0 | 156 | 0.1915 | | 0.2186 | 13.0 | 169 | 0.1556 | | 0.1646 | 14.0 | 182 | 0.1097 | | 0.1504 | 15.0 | 195 | 0.0862 | | 0.113 | 16.0 | 208 | 0.0725 | | 0.0864 | 17.0 | 221 | 0.0500 | | 0.0722 | 18.0 | 234 | 0.0425 | | 0.0654 | 19.0 | 247 | 0.0389 | | 0.0581 | 20.0 | 260 | 0.0312 | | 0.052 | 21.0 | 273 | 0.0284 | | 0.0459 | 22.0 | 286 | 0.0275 | | 0.0426 | 23.0 | 299 | 0.0213 | | 0.0381 | 24.0 | 312 | 0.0200 | | 0.0374 | 25.0 | 325 | 0.0186 | | 0.0334 | 26.0 | 338 | 0.0185 | | 0.0307 | 27.0 | 351 | 0.0157 | | 0.0292 | 28.0 | 364 | 0.0147 | | 0.0269 | 29.0 | 377 | 0.0146 | | 0.0261 | 30.0 | 390 | 0.0129 | | 0.0248 | 31.0 | 403 | 0.0121 | | 0.0237 | 32.0 | 416 | 0.0114 | | 0.023 | 33.0 | 429 | 0.0115 | | 0.0217 | 34.0 | 442 | 0.0109 | | 0.0208 | 35.0 | 455 | 0.0101 | | 0.0205 | 36.0 | 468 | 0.0102 | | 0.0197 | 37.0 | 481 | 0.0098 | | 0.0193 | 38.0 | 494 | 0.0096 | | 0.019 | 39.0 | 507 | 0.0094 | | 0.0188 | 40.0 | 520 | 0.0095 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
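The auto-generated card includes no usage example; below is a minimal inference sketch. The `"23+45"` input format is a guess from the model name, not something the card documents:

```python
# Minimal inference sketch for the encoder-decoder calculator model.
# The "23+45"-style input format is an assumption; the card documents none.
from transformers import pipeline

calc = pipeline("text2text-generation", model="adambiel22/calculator_model_test")
print(calc("23+45")[0]["generated_text"])
```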
TeeZee/Kyllene-57B-v1.0
TeeZee
2024-03-04T14:38:50Z
67
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-16T03:40:58Z
--- license: other tags: - merge license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE model-index: - name: Kyllene-v1.0 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 64.85 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.51 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 73.33 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 57.89 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 30.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Kyllene-v1.0 name: Open LLM Leaderboard --- # Kyllene 57B v1.0 ![image/png](https://huggingface.co/TeeZee/Kyllene-v1.0/resolve/main/Kyllene.jpg) ## Model Details - The result of a SLERP merge of [TeeZee/2xbagel-dpo-34b-v0.2](https://huggingface.co/TeeZee/2xbagel-dpo-34b-v0.2) and [TeeZee/2xNous-Capybara-34B](https://huggingface.co/TeeZee/2xNous-Capybara-34B). - The resulting model has 100 layers and approximately 57 billion parameters. - 200K context length - See [mergekit-config.yml](https://huggingface.co/TeeZee/Kyllene-v1.0/blob/main/mergekit-config.yml) for details on the merge method used. **Warning: This model can produce NSFW content!** ## Results - produces SFW and NSFW content without issues, switches context seamlessly.
- good at following instructions - pretty close to my ideal RPG/ERP everyday model (but more tests are needed) All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me some fuel: <a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__Kyllene-v1.0) | Metric |Value| |---------------------------------|----:| |Avg. |64.86| |AI2 Reasoning Challenge (25-Shot)|64.85| |HellaSwag (10-Shot) |84.51| |MMLU (5-Shot) |73.33| |TruthfulQA (0-shot) |57.89| |Winogrande (5-shot) |78.22| |GSM8k (5-shot) |30.40|
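At roughly 57 billion parameters the model will not fit on a single consumer GPU in 16-bit precision; one common workaround (not from the card) is 4-bit loading via bitsandbytes:

```python
# Hedged sketch: loading the ~57B model in 4-bit to reduce VRAM requirements.
# Requires the bitsandbytes and accelerate packages; not part of the original card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

name = "TeeZee/Kyllene-57B-v1.0"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, quantization_config=bnb, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```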
Kukedlc/NeuralFusion-7b-Dare-Ties
Kukedlc
2024-03-04T14:36:08Z
51
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Kukedlc/NeuralMaxime-7B-slerp", "Kukedlc/Fasciculus-Arcuatus-7B-slerp", "Kukedlc/NeoCortex-7B-slerp", "base_model:Kukedlc/Fasciculus-Arcuatus-7B-slerp", "base_model:merge:Kukedlc/Fasciculus-Arcuatus-7B-slerp", "base_model:Kukedlc/NeoCortex-7B-slerp", "base_model:merge:Kukedlc/NeoCortex-7B-slerp", "base_model:Kukedlc/NeuralMaxime-7B-slerp", "base_model:merge:Kukedlc/NeuralMaxime-7B-slerp", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T21:09:16Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - Kukedlc/NeuralMaxime-7B-slerp - Kukedlc/Fasciculus-Arcuatus-7B-slerp - Kukedlc/NeoCortex-7B-slerp base_model: - Kukedlc/NeuralMaxime-7B-slerp - Kukedlc/Fasciculus-Arcuatus-7B-slerp - Kukedlc/NeoCortex-7B-slerp model-index: - name: NeuralFusion-7b-Dare-Ties results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.21 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.96 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.77 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 73.32 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.83 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties name: Open LLM Leaderboard --- # NeuralFusion-7b-Dare-Ties NeuralFusion-7b-Dare-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp) * [Kukedlc/Fasciculus-Arcuatus-7B-slerp](https://huggingface.co/Kukedlc/Fasciculus-Arcuatus-7B-slerp) * [Kukedlc/NeoCortex-7B-slerp](https://huggingface.co/Kukedlc/NeoCortex-7B-slerp) ## 🧩 Configuration ```yaml models: - model: mlabonne/Monarch-7B # no parameters necessary for base model - model: Kukedlc/NeuralMaxime-7B-slerp parameters: density: 0.65 weight: 0.36 - model: Kukedlc/Fasciculus-Arcuatus-7B-slerp parameters: density: 0.6 weight: 0.34 - model: Kukedlc/NeoCortex-7B-slerp parameters: density: 0.6 weight: 0.3 merge_method: dare_ties base_model: mlabonne/Monarch-7B parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = 
"Kukedlc/NeuralFusion-7b-Dare-Ties" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralFusion-7b-Dare-Ties) | Metric |Value| |---------------------------------|----:| |Avg. |75.94| |AI2 Reasoning Challenge (25-Shot)|73.21| |HellaSwag (10-Shot) |88.96| |MMLU (5-Shot) |64.77| |TruthfulQA (0-shot) |73.32| |Winogrande (5-shot) |85.56| |GSM8k (5-shot) |69.83|
ludziej/calculator_model_test
ludziej
2024-03-04T14:36:00Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-27T13:30:00Z
--- tags: - generated_from_trainer model-index: - name: calculator_model_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # calculator_model_test This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 512 - eval_batch_size: 512 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9786 | 1.0 | 6 | 2.2468 | | 2.0116 | 2.0 | 12 | 1.7255 | | 1.5628 | 3.0 | 18 | 1.3183 | | 1.2273 | 4.0 | 24 | 1.0790 | | 1.0219 | 5.0 | 30 | 0.9549 | | 0.9147 | 6.0 | 36 | 0.8300 | | 0.7993 | 7.0 | 42 | 0.7326 | | 0.7256 | 8.0 | 48 | 0.7028 | | 0.6677 | 9.0 | 54 | 0.6345 | | 0.6369 | 10.0 | 60 | 0.5926 | | 0.5883 | 11.0 | 66 | 0.5722 | | 0.5506 | 12.0 | 72 | 0.5835 | | 0.5504 | 13.0 | 78 | 0.5491 | | 0.5232 | 14.0 | 84 | 0.5165 | | 0.4977 | 15.0 | 90 | 0.4719 | | 0.4754 | 16.0 | 96 | 0.4910 | | 0.4706 | 17.0 | 102 | 0.4623 | | 0.4659 | 18.0 | 108 | 0.4202 | | 0.4186 | 19.0 | 114 | 0.3720 | | 0.385 | 20.0 | 120 | 0.3768 | | 0.3692 | 21.0 | 126 | 0.3751 | | 0.3775 | 22.0 | 132 | 0.3508 | | 0.3359 | 23.0 | 138 | 0.3427 | | 0.3412 | 24.0 | 144 | 0.3015 | | 0.3128 | 25.0 | 150 | 0.2826 | | 0.2892 | 26.0 | 156 | 0.2688 | | 0.28 | 27.0 | 162 | 0.2512 | | 0.2653 | 28.0 | 168 | 0.2385 | | 0.255 | 29.0 | 174 | 0.2275 | | 0.249 | 30.0 | 180 | 0.2108 | | 0.2316 | 31.0 | 186 | 0.1894 | | 0.2184 | 32.0 | 192 | 0.1777 | | 0.2057 | 33.0 | 198 | 0.1644 | | 0.186 | 34.0 | 204 | 0.1527 | | 0.1853 | 35.0 | 210 | 0.1484 | | 0.1721 | 36.0 | 216 | 0.1380 | | 0.165 | 37.0 | 222 | 0.1336 | | 0.1544 | 38.0 | 228 | 0.1299 | | 0.153 | 39.0 | 234 | 0.1288 | | 0.158 | 40.0 | 240 | 0.1268 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
MaciekB/calculator_model_test
MaciekB
2024-03-04T14:35:10Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-04T14:28:27Z
--- tags: - generated_from_trainer model-index: - name: calculator_model_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # calculator_model_test This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6527 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 512 - eval_batch_size: 512 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6778 | 1.0 | 6 | 3.1191 | | 2.7094 | 2.0 | 12 | 2.1966 | | 1.9597 | 3.0 | 18 | 1.7446 | | 1.7331 | 4.0 | 24 | 1.6583 | | 1.5938 | 5.0 | 30 | 1.6425 | | 1.5338 | 6.0 | 36 | 1.5592 | | 1.5077 | 7.0 | 42 | 1.5055 | | 1.4896 | 8.0 | 48 | 1.4880 | | 1.4419 | 9.0 | 54 | 1.4727 | | 1.4062 | 10.0 | 60 | 1.3960 | | 1.3459 | 11.0 | 66 | 1.3129 | | 1.2961 | 12.0 | 72 | 1.3037 | | 1.2268 | 13.0 | 78 | 1.2964 | | 1.2251 | 14.0 | 84 | 1.1677 | | 1.1559 | 15.0 | 90 | 1.1312 | | 1.1157 | 16.0 | 96 | 1.1714 | | 1.1385 | 17.0 | 102 | 1.1348 | | 1.0996 | 18.0 | 108 | 1.1113 | | 1.0407 | 19.0 | 114 | 0.9871 | | 0.9734 | 20.0 | 120 | 0.9324 | | 0.9512 | 21.0 | 126 | 0.9743 | | 0.951 | 22.0 | 132 | 0.9441 | | 0.917 | 23.0 | 138 | 0.8909 | | 0.8726 | 24.0 | 144 | 0.9193 | | 0.8937 | 25.0 | 150 | 0.8686 | | 0.8351 | 26.0 | 156 | 0.8182 | | 0.8397 | 27.0 | 162 | 0.7957 | | 0.8148 | 28.0 | 168 | 0.7851 | | 0.7866 | 29.0 | 174 | 0.7707 | | 0.7579 | 30.0 | 180 | 0.7610 | | 0.7516 | 31.0 | 186 | 0.7259 | | 0.734 | 32.0 | 192 | 0.7193 | | 0.7375 | 33.0 | 198 | 0.7392 | | 0.7284 | 34.0 | 204 | 0.7019 | | 0.7283 | 35.0 | 210 | 0.6881 | | 0.6968 | 36.0 | 216 | 0.6745 | | 0.69 | 37.0 | 222 | 0.6672 | | 0.6877 | 38.0 | 228 | 0.6606 | | 0.6741 | 39.0 | 234 | 0.6575 | | 0.6741 | 40.0 | 240 | 0.6527 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
saishf/Top-Western-Maid-7B
saishf
2024-03-04T14:33:41Z
50
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:merge:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:Undi95/Toppy-M-7B", "base_model:merge:Undi95/Toppy-M-7B", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "base_model:senseable/WestLake-7B-v2", "base_model:merge:senseable/WestLake-7B-v2", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-04T12:23:32Z
--- license: cc-by-nc-4.0 tags: - mergekit - merge base_model: - NeverSleep/Noromaid-7B-0.4-DPO - Undi95/Toppy-M-7B - mistralai/Mistral-7B-v0.1 - senseable/WestLake-7B-v2 model-index: - name: Top-Western-Maid-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.37 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Top-Western-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.4 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Top-Western-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.63 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Top-Western-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 58.79 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Top-Western-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.27 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Top-Western-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 65.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Top-Western-Maid-7B name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base. 
### Models Merged The following models were included in the merge: * [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO) * [Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B) * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: senseable/WestLake-7B-v2 parameters: density: 0.53 weight: 0.55 - model: NeverSleep/Noromaid-7B-0.4-DPO parameters: density: 0.53 weight: 0.30 - model: Undi95/Toppy-M-7B parameters: density: 0.53 weight: 0.15 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__Top-Western-Maid-7B) | Metric |Value| |---------------------------------|----:| |Avg. |71.57| |AI2 Reasoning Challenge (25-Shot)|69.37| |HellaSwag (10-Shot) |87.40| |MMLU (5-Shot) |64.63| |TruthfulQA (0-shot) |58.79| |Winogrande (5-shot) |83.27| |GSM8k (5-shot) |65.96|
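For intuition about the DARE step used above: it operates on the *delta* between each fine-tuned model and the base, randomly dropping delta entries and rescaling the survivors so the expected update is unchanged, before the TIES-style sign resolution. A toy tensor-level sketch (illustrative only; the actual merge was produced by mergekit):

```python
# Toy illustration of the DARE step (Drop And REscale) behind dare_ties.
# Illustrative only; the merge above was produced with mergekit, not this code.
import torch

def dare(base: torch.Tensor, finetuned: torch.Tensor, density: float) -> torch.Tensor:
    delta = finetuned - base                 # the "task vector" of the finetune
    mask = torch.rand_like(delta) < density  # keep each entry with probability = density
    rescaled = delta * mask / density        # rescale so the expected delta is preserved
    return base + rescaled

base_w = torch.randn(4, 4)
tuned_w = base_w + 0.1 * torch.randn(4, 4)
merged_w = dare(base_w, tuned_w, density=0.53)  # density matches the config above
```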
saishf/West-Maid-7B
saishf
2024-03-04T14:33:29Z
66
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:merge:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:senseable/WestLake-7B-v2", "base_model:merge:senseable/WestLake-7B-v2", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T01:37:23Z
--- license: cc-by-nc-4.0 tags: - mergekit - merge base_model: - senseable/WestLake-7B-v2 - NeverSleep/Noromaid-7B-0.4-DPO model-index: - name: West-Maid-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.24 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.44 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.85 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.0 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 62.32 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) * [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: senseable/WestLake-7B-v2 layer_range: [0, 32] - model: NeverSleep/Noromaid-7B-0.4-DPO layer_range: [0, 32] merge_method: slerp base_model: senseable/WestLake-7B-v2 parameters: t: - filter: self_attn value: [0.6, 0.7, 0.8, 0.9, 1] - filter: mlp value: [0.4, 0.3, 0.2, 0.1, 0] - value: 0.5 dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__West-Maid-7B) | Metric |Value| |---------------------------------|----:| |Avg. |69.09| |AI2 Reasoning Challenge (25-Shot)|67.24| |HellaSwag (10-Shot) |86.44| |MMLU (5-Shot) |64.85| |TruthfulQA (0-shot) |51.00| |Winogrande (5-shot) |82.72| |GSM8k (5-shot) |62.32|
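For reference, SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which better preserves their norms. A toy sketch of the formula (illustrative; mergekit applies it per tensor, driven by the `t` schedule shown above):

```python
# Toy spherical linear interpolation between two flattened weight tensors.
# Illustrative only; mergekit applies this per tensor with the t schedule above.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    cos_omega = torch.clamp(a_n.flatten() @ b_n.flatten(), -1.0, 1.0)
    omega = torch.arccos(cos_omega)  # angle between the two weight directions
    so = torch.sin(omega)
    if so.abs() < eps:               # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

w_west = torch.randn(8)  # stand-ins for corresponding tensors of the two models
w_noro = torch.randn(8)
w_merged = slerp(w_west, w_noro, t=0.5)
```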
luqmanxyz/Maya_Hermes-2.5-Mistral-7B
luqmanxyz
2024-03-04T14:33:15Z
1,353
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-20T00:51:55Z
--- license: apache-2.0 datasets: - argilla/distilabel-intel-orca-dpo-pairs model-index: - name: Maya_Hermes-2.5-Mistral-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.3 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.07 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.23 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 55.89 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.85 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 62.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/Maya_Hermes-2.5-Mistral-7B name: Open LLM Leaderboard --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This model is a DPO-finetuned variation of https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B, trained on the argilla/distilabel-intel-orca-dpo-pairs dataset. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_luqmanxyz__Maya_Hermes-2.5-Mistral-7B) | Metric |Value| |---------------------------------|----:| |Avg. |68.60| |AI2 Reasoning Challenge (25-Shot)|66.30| |HellaSwag (10-Shot) |85.07| |MMLU (5-Shot) |63.23| |TruthfulQA (0-shot) |55.89| |Winogrande (5-shot) |78.85| |GSM8k (5-shot) |62.24|
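The card gives no usage snippet or prompt format; the OpenHermes-2.5 base uses ChatML, so assuming this finetune inherits its chat template, a hedged inference sketch would be:

```python
# Inference sketch (not from the card); assumes the finetune keeps the
# ChatML chat template inherited from OpenHermes-2.5-Mistral-7B.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="luqmanxyz/Maya_Hermes-2.5-Mistral-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)[0]["generated_text"])
```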
saishf/Kuro-Lotus-10.7B
saishf
2024-03-04T14:33:07Z
119
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:BlueNipples/SnowLotus-v2-10.7B", "base_model:merge:BlueNipples/SnowLotus-v2-10.7B", "base_model:Himitsui/KuroMitsu-11B", "base_model:merge:Himitsui/KuroMitsu-11B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T00:46:52Z
--- license: cc-by-nc-4.0 tags: - mergekit - merge base_model: - BlueNipples/SnowLotus-v2-10.7B - Himitsui/KuroMitsu-11B model-index: - name: Kuro-Lotus-10.7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.69 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.51 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 58.27 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.21 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 66.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [BlueNipples/SnowLotus-v2-10.7B](https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B) * [Himitsui/KuroMitsu-11B](https://huggingface.co/Himitsui/KuroMitsu-11B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Himitsui/KuroMitsu-11B layer_range: [0, 48] - model: BlueNipples/SnowLotus-v2-10.7B layer_range: [0, 48] merge_method: slerp base_model: Himitsui/KuroMitsu-11B parameters: t: - filter: self_attn value: [0.6, 0.7, 0.8, 0.9, 1] - filter: mlp value: [0.4, 0.3, 0.2, 0.1, 0] - value: 0.5 dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__Kuro-Lotus-10.7B) | Metric |Value| |---------------------------------|----:| |Avg. 
|71.90| |AI2 Reasoning Challenge (25-Shot)|68.69| |HellaSwag (10-Shot) |87.51| |MMLU (5-Shot) |66.64| |TruthfulQA (0-shot) |58.27| |Winogrande (5-shot) |84.21| |GSM8k (5-shot) |66.11|
luqmanxyz/LelaStarling-7B
luqmanxyz
2024-03-04T14:32:49Z
64
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "SanjiWatsuki/Lelantos-DPO-7B", "berkeley-nest/Starling-LM-7B-alpha", "conversational", "base_model:SanjiWatsuki/Lelantos-DPO-7B", "base_model:merge:SanjiWatsuki/Lelantos-DPO-7B", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:merge:berkeley-nest/Starling-LM-7B-alpha", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-20T23:14:53Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - SanjiWatsuki/Lelantos-DPO-7B - berkeley-nest/Starling-LM-7B-alpha base_model: - SanjiWatsuki/Lelantos-DPO-7B - berkeley-nest/Starling-LM-7B-alpha model-index: - name: LelaStarling-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 57.73 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B name: Open LLM Leaderboard --- # LelaStarling-7B LelaStarling-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [SanjiWatsuki/Lelantos-DPO-7B](https://huggingface.co/SanjiWatsuki/Lelantos-DPO-7B) * [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) ## 🧩 Configuration ```yaml slices: - sources: - model: SanjiWatsuki/Lelantos-DPO-7B layer_range: [0, 32] - model: berkeley-nest/Starling-LM-7B-alpha layer_range: [0, 32] merge_method: slerp base_model: SanjiWatsuki/Lelantos-DPO-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "luqmanxyz/LelaStarling-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, 
device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_luqmanxyz__LelaStarling-7B) | Metric |Value| |---------------------------------|----:| |Avg. |71.45| |AI2 Reasoning Challenge (25-Shot)|67.58| |HellaSwag (10-Shot) |86.33| |MMLU (5-Shot) |64.98| |TruthfulQA (0-shot) |57.73| |Winogrande (5-shot) |80.98| |GSM8k (5-shot) |71.11|
saishf/Kuno-Lake-7B
saishf
2024-03-04T14:32:29Z
113
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "base_model:senseable/WestLake-7B-v2", "base_model:merge:senseable/WestLake-7B-v2", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T02:07:56Z
--- license: cc-by-nc-4.0 tags: - mergekit - merge base_model: - mistralai/Mistral-7B-v0.1 - senseable/WestLake-7B-v2 - SanjiWatsuki/Kunoichi-DPO-v2-7B model-index: - name: Kuno-Lake-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.84 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuno-Lake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.15 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuno-Lake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.76 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuno-Lake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 66.83 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuno-Lake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuno-Lake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 65.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuno-Lake-7B name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base. 
### Models Merged The following models were included in the merge: * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: senseable/WestLake-7B-v2 parameters: density: 0.53 weight: 0.65 - model: SanjiWatsuki/Kunoichi-DPO-v2-7B parameters: density: 0.53 weight: 0.35 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__Kuno-Lake-7B) | Metric |Value| |---------------------------------|----:| |Avg. |73.56| |AI2 Reasoning Challenge (25-Shot)|71.84| |HellaSwag (10-Shot) |88.15| |MMLU (5-Shot) |64.76| |TruthfulQA (0-shot) |66.83| |Winogrande (5-shot) |84.45| |GSM8k (5-shot) |65.35|
Vengeance12/Contineu
Vengeance12
2024-03-04T14:27:02Z
6
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-04T14:26:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
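Pending the sections above being filled in, a generic inference sketch for a ViT image classifier follows; the repository id and image path are placeholders, not values from this card:

```python
from transformers import pipeline

# "namespace/vit-model" and "example.jpg" are placeholders, not values from this card.
classifier = pipeline("image-classification", model="namespace/vit-model")

for pred in classifier("example.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```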
sloppyTurtle221/calculator_model_test
sloppyTurtle221
2024-03-04T14:26:18Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-04T14:25:00Z
--- tags: - generated_from_trainer model-index: - name: calculator_model_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # calculator_model_test This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 512 - eval_batch_size: 512 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3715 | 1.0 | 6 | 2.7668 | | 2.4459 | 2.0 | 12 | 2.0165 | | 1.8589 | 3.0 | 18 | 1.7392 | | 1.7744 | 4.0 | 24 | 1.6714 | | 1.6018 | 5.0 | 30 | 1.5897 | | 1.4921 | 6.0 | 36 | 1.5386 | | 1.4816 | 7.0 | 42 | 1.4612 | | 1.4355 | 8.0 | 48 | 1.4503 | | 1.33 | 9.0 | 54 | 1.3248 | | 1.2827 | 10.0 | 60 | 1.2280 | | 1.234 | 11.0 | 66 | 1.2248 | | 1.2229 | 12.0 | 72 | 1.3745 | | 1.2522 | 13.0 | 78 | 1.1725 | | 1.1299 | 14.0 | 84 | 1.0781 | | 1.0669 | 15.0 | 90 | 1.0417 | | 1.0125 | 16.0 | 96 | 1.0053 | | 0.9977 | 17.0 | 102 | 1.0263 | | 1.0611 | 18.0 | 108 | 1.0528 | | 1.0357 | 19.0 | 114 | 0.9557 | | 0.927 | 20.0 | 120 | 0.9334 | | 0.9075 | 21.0 | 126 | 0.8948 | | 0.8795 | 22.0 | 132 | 0.9888 | | 0.9473 | 23.0 | 138 | 0.9332 | | 0.8718 | 24.0 | 144 | 0.8529 | | 0.8661 | 25.0 | 150 | 0.8421 | | 0.8381 | 26.0 | 156 | 0.8280 | | 0.7939 | 27.0 | 162 | 0.7921 | | 0.8426 | 28.0 | 168 | 0.7751 | | 0.7897 | 29.0 | 174 | 0.7592 | | 0.7687 | 30.0 | 180 | 0.7575 | | 0.77 | 31.0 | 186 | 0.7346 | | 0.7479 | 32.0 | 192 | 0.7266 | | 0.7247 | 33.0 | 198 | 0.7156 | | 0.7278 | 34.0 | 204 | 0.7154 | | 0.7241 | 35.0 | 210 | 0.6853 | | 0.7037 | 36.0 | 216 | 0.6897 | | 0.6949 | 37.0 | 222 | 0.6697 | | 0.7135 | 38.0 | 228 | 0.6661 | | 0.6864 | 39.0 | 234 | 0.6619 | | 0.6812 | 40.0 | 240 | 0.6587 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
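Since the card's usage section is unfilled, a minimal inference sketch follows; the plain `a+b` input format is an assumption, as the card does not document the expected format:

```python
from transformers import pipeline

# The plain "a+b" input format is an assumption; the card does not document it.
calc = pipeline("text2text-generation", model="sloppyTurtle221/calculator_model_test")
print(calc("24+17")[0]["generated_text"])
```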
GIZ/VULNERABILITY-multilabel-mpnet
GIZ
2024-03-04T14:25:18Z
54
1
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "region:us" ]
text-classification
2024-02-26T14:24:06Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: Implementing the reform required strong support from all ministries involved. A major effort was required to present the conceptual change to car importers, politicians and the public. A great deal was also invested in public relations to describe the benefits of the tax, which by many was perceived as yet another attempt to increase tax revenues. A number of the most popular car models’ prices were about to increase – mostly large family, luxury and sport cars – but for many models, the retail price was actually reduced. - text: Workers in the formal sector. Formal sector workers also face economic risks. A number of them experience income instability due to contractualization, retrenchment, and firm closures. In 2014, contractual workers accounted for 22 percent of the total 4.5 million workers employed in establishments with 20 or more employees. - text: Building additional dams and power stations to further develop energy generation potential from the same river flow as well as develop new dam sites on parallel rivers in order to maintain the baseline hydropower electricity generation capacity to levels attainable under a ‘no-climate change’ scenario. Developing and implementing climate change compatible building/construction codes for buildings, roads, airports, airfields, dry ports, railways, bridges, dams and irrigation canals that are safe for human life and minimize economic damage that is likely to result from increasing extremes in flooding. - text: Another factor that increases farmer vulnerability is the remoteness of farm villages and lack of adequate road infrastructure. Across the three regions, roads are in a poor state and unevenly distributed, with many villages lacking roads that connect them to other villages. Even the main roads are often accessible only during the dry season. The livelihood implications of this isolation are significant, as farmers have difficulties getting their products to markets as well as obtaining agricultural inputs; in addition, farmers generally have to pay higher prices for agricultural inputs in remote areas, reducing their profit margins - text: This project aims to construct a desalination plant in the capital city in order to respond directly to drinking water supply needs. This new plant, which will have a capacity of 22,500 m3 daily, easily expandable to 45,000 m3, will be fuelled by renewable energy, which is expected to be provided by a wind farm planned for the second phase of the project. Funding: European Union. Rural Community Development and Water Mobilization Project (PRODERMO). pipeline_tag: text-classification inference: false base_model: sentence-transformers/all-mpnet-base-v2 --- # SetFit with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2.
Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 18 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("GIZ/VULNERABILITY-multilabel-mpnet") # Run inference preds = model("Workers in the formal sector. Formal sector workers also face economic risks. A number of them experience income instability due to contractualization, retrenchment, and firm closures. In 2014, contractual workers accounted for 22 percent of the total 4.5 million workers employed in establishments with 20 or more employees.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 21 | 72.6472 | 238 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 0) - max_steps: -1 - sampling_strategy: undersampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0006 | 1 | 0.1906 | - | | 0.0316 | 50 | 0.1275 | 0.1394 | | 0.0631 | 100 | 0.0851 | 0.1247 | | 0.0947 | 150 | 0.0959 | 0.1269 | | 0.1263 | 200 | 0.1109 | 0.1179 | | 0.1578 | 250 | 0.0923 | 0.1354 | | 0.1894 | 300 | 0.063 | 0.1292 | | 0.2210 | 350 | 0.0555 | 0.1326 | | 0.2525 | 400 | 0.0362 | 0.1127 | | 0.2841 | 450 | 0.0582 | 0.132 | | 0.3157 | 500 | 0.0952 | 0.1339 | | 0.3472 | 550 | 0.0793 | 0.1171 | | 0.3788 | 600 | 0.059 | 0.1187 | | 0.4104 | 650 | 0.0373 | 0.1131 | | 0.4419 | 700 | 0.0593 | 0.1144 | | 0.4735 | 750 | 0.0405 | 0.1174 | | 0.5051 | 800 | 0.0284 | 0.1196 | | 0.5366 | 850 | 0.0329 | 0.1116 | | 0.5682 | 900 | 0.0895 | 0.1193 | | 0.5997 | 950 | 0.0576 | 0.1159 | | 0.6313 | 1000 | 0.0385 | 0.1203 | | 0.6629 | 1050 | 0.0842 | 0.1195 | | 0.6944 | 1100 | 0.0274 | 0.113 | | 0.7260 | 1150 | 0.0226 | 0.1137 | | 0.7576 | 1200 | 0.0276 | 0.1204 | | 0.7891 | 1250 | 0.0355 | 0.1163 | | 0.8207 | 1300 | 0.077 | 0.1161 | | 0.8523 | 1350 | 0.0735 | 0.1135 | | 0.8838 | 1400 | 0.0357 | 0.1175 | | 0.9154 | 1450 | 0.0313 | 0.1207 | | 0.9470 | 1500 | 0.0241 | 0.1159 | | 0.9785 | 1550 | 0.0339 | 0.1161 | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.3.1 - Transformers: 4.38.1 - PyTorch: 2.1.0+cu121 - Datasets: 2.3.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
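Since this is an 18-class multilabel classifier, per-class probabilities can be more informative than hard labels; a small sketch using the SetFit 1.0 API:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("GIZ/VULNERABILITY-multilabel-mpnet")

# predict_proba returns one 18-dimensional probability row per input text.
probs = model.predict_proba(["Formal sector workers also face economic risks."])
print(probs)
```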
vicgalle/TruthfulQwen1.5-1.8B
vicgalle
2024-03-04T14:24:14Z
48
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "dataset:jondurbin/truthy-dpo-v0.1", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-01T19:55:14Z
--- library_name: transformers license: apache-2.0 datasets: - jondurbin/truthy-dpo-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
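Pending the sections above being filled in, a minimal getting-started sketch for this text-generation model (chat-template usage is assumed from the model's conversational tag):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vicgalle/TruthfulQwen1.5-1.8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Is the Great Wall of China visible from space?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```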
malhajar/meditron-7b-chat
malhajar
2024-03-04T14:19:59Z
73
6
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Medicine", "en", "dataset:yahma/alpaca-cleaned", "base_model:epfl-llm/meditron-7b", "base_model:finetune:epfl-llm/meditron-7b", "license:llama2", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T12:10:33Z
--- language: - en license: llama2 tags: - Medicine datasets: - yahma/alpaca-cleaned base_model: epfl-llm/meditron-7b model-index: - name: meditron-7b-chat results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 50.77 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/meditron-7b-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 75.37 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/meditron-7b-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 40.49 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/meditron-7b-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 48.56 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/meditron-7b-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 73.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/meditron-7b-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 9.17 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/meditron-7b-chat name: Open LLM Leaderboard --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> meditron-7b-chat is a finetuned version of [`epfl-llm/meditron-7b`](https://huggingface.co/epfl-llm/meditron-7b) using SFT Training on the Alpaca Dataset. This model can answer questions about various explicit topics in medicine (see [`epfl-llm/meditron-7b`](https://huggingface.co/epfl-llm/meditron-7b) for more info). ### Model Description - **Finetuned by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/) - **Language(s) (NLP):** English - **Finetuned from model:** [`epfl-llm/meditron-7b`](https://huggingface.co/epfl-llm/meditron-7b) ### Prompt Template ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ## How to Get Started with the Model Use the code sample below to interact with the model. ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "malhajar/meditron-7b-chat" model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_id) question = "what is tract infection?"
# For generating a response prompt = f''' ### Instruction: {question} ### Response:''' input_ids = tokenizer(prompt, return_tensors="pt").input_ids output = model.generate(inputs=input_ids, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id, top_k=50, do_sample=True, top_p=0.95) response = tokenizer.decode(output[0]) print(response) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_malhajar__meditron-7b-chat) | Metric |Value| |---------------------------------|----:| |Avg. |49.59| |AI2 Reasoning Challenge (25-Shot)|50.77| |HellaSwag (10-Shot) |75.37| |MMLU (5-Shot) |40.49| |TruthfulQA (0-shot) |48.56| |Winogrande (5-shot) |73.16| |GSM8k (5-shot) | 9.17|
malhajar/Mistral-7B-v0.2-meditron-turkish
malhajar
2024-03-04T14:19:13Z
4,196
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "tr", "en", "dataset:malhajar/meditron-tr", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-05T06:39:28Z
--- language: - tr - en license: apache-2.0 datasets: - malhajar/meditron-tr model-index: - name: Mistral-7B-v0.2-meditron-turkish results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 59.56 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 81.79 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 60.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 66.19 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 35.94 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish name: Open LLM Leaderboard --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> Mistral-7B-v0.2-meditron-turkish is a fine-tuned Mistral model, trained with SFT using the Freeze technique on the Turkish Meditron dataset [`malhajar/meditron-tr`](https://huggingface.co/datasets/malhajar/meditron-tr). This model can answer questions about various explicit topics in medicine in both Turkish and English. ### Model Description - **Finetuned by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/) - **Language(s) (NLP):** Turkish, English - **Finetuned from model:** [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) ### Prompt Template For Turkish Generation ``` ### Kullancı: ``` ### Prompt Template For English Generation ``` ### User: ``` ## How to Get Started with the Model Use the code sample below to interact with the model.
```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "malhajar/Mistral-7B-v0.2-meditron-turkish" model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_id) question = "Akciğer kanseri nedir?" # For generating a response prompt = f''' ### Kullancı: {question} ''' input_ids = tokenizer(prompt, return_tensors="pt").input_ids output = model.generate(inputs=input_ids, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id, top_k=50, do_sample=True, top_p=0.95) response = tokenizer.decode(output[0]) print(response) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_malhajar__Mistral-7B-v0.2-meditron-turkish) | Metric |Value| |---------------------------------|----:| |Avg. |63.34| |AI2 Reasoning Challenge (25-Shot)|59.56| |HellaSwag (10-Shot) |81.79| |MMLU (5-Shot) |60.35| |TruthfulQA (0-shot) |66.19| |Winogrande (5-shot) |76.24| |GSM8k (5-shot) |35.94|
manojpreveen/Llama-2-70b-hf-ChatOrca-v2
manojpreveen
2024-03-04T14:15:58Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:iamplus/LLama2-SFT-Data", "dataset:iamplus/Open_Platypus_Orca", "dataset:iamplus/Orca", "dataset:iamplus/Conversational_Data", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T14:15:58Z
--- license: mit datasets: - iamplus/LLama2-SFT-Data - iamplus/Open_Platypus_Orca - iamplus/Orca - iamplus/Conversational_Data --- **Description :** This model is trained on a mix of Orca data and open-source + closed multi-turn conversation data to create a better reasoning model that is also capable of holding multi-turn conversations. The dataset split description, prompt description, and training parameters are given below. **Prompt Description :** The prompt template for the first turn looks like this: ``` <s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST] ``` The prompt template for the multi-turn conversation looks like this: ``` <s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_msg_2 }} [/INST] ``` This model follows Meta's official chat model prompt format; a sketch of assembling this format is shown at the end of this card. Please refer to https://huggingface.co/blog/llama2#how-to-prompt-llama-2 for how to prompt the model for single/multi-turn conversations. **Base model :** meta-llama/Llama-2-70b-hf **Data :** 1. 1M Orca data (GPT-4 Orca data - OpenOrca) 2. 1.7M chat data (includes OpenAssistant chat data, UltraChat, and many more open-source chat datasets) 3. 30k OpenPlatypus data **Training Params :** ``` Number of Epochs : 2 Batch Size : 64 Sequence Length : 4096 Learning Rate : 2e-5 (Cosine) Weight Decay : 0.1 Gradient Clipping : 1.0 Gamma : 0.85 beta_1 : 0.9 beta_2 : 0.95 eps : 1e-5 Precision : bf16 Optimizer : Any Precision AdamW Optimizer ```
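A sketch of assembling the prompt format described above; the function name and example strings are illustrative, not from the training code:

```python
def build_llama2_prompt(system_prompt, turns):
    """turns: list of (user_msg, model_answer) pairs; the last answer may be None."""
    text = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    for i, (user_msg, model_answer) in enumerate(turns):
        # Only the first turn carries the system block; later turns open fresh [INST] spans.
        text += f"{user_msg} [/INST]" if i == 0 else f"<s>[INST] {user_msg} [/INST]"
        if model_answer is not None:
            text += f" {model_answer} </s>"
    return text

print(build_llama2_prompt(
    "You are a helpful assistant.",
    [("Name three mammals.", "Dogs, cats, and whales."),
     ("Which of them can fly?", None)],
))
```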
Amu/r-zephyr-7b-beta-qlora
Amu
2024-03-04T14:11:10Z
0
0
peft
[ "peft", "pytorch", "mistral", "alignment-handbook", "generated_from_trainer", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-02-12T14:11:12Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized base_model: mistralai/Mistral-7B-v0.1 model-index: - name: r-zephyr-7b-beta-qlora results: [] --- # r-zephyr-7b-beta-qlora The 'r' means replicate. This model was replicated using https://github.com/huggingface/alignment-handbook. It is a QLoRA fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.5232 - Rewards/chosen: -0.9374 - Rewards/rejected: -1.7181 - Rewards/accuracies: 0.7734 - Rewards/margins: 0.7807 - Logps/rejected: -420.1122 - Logps/chosen: -341.2448 - Logits/rejected: 0.6190 - Logits/chosen: 0.6345 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.5917 | 0.21 | 100 | 0.5950 | -0.3904 | -0.7775 | 0.7109 | 0.3872 | -326.0618 | -286.5451 | -1.9790 | -1.9769 | | 0.5281 | 0.42 | 200 | 0.5492 | -0.8657 | -1.6137 | 0.7617 | 0.7479 | -409.6739 | -334.0814 | -0.2289 | -0.2367 | | 0.5321 | 0.63 | 300 | 0.5321 | -0.7444 | -1.4427 | 0.7734 | 0.6983 | -392.5731 | -321.9463 | 0.3829 | 0.3741 | | 0.5149 | 0.84 | 400 | 0.5233 | -0.9570 | -1.7432 | 0.7617 | 0.7862 | -422.6298 | -343.2071 | 0.6479 | 0.6688 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_amu__r-zephyr-7b-beta-qlora) | Metric |Value| |---------------------------------|----:| |Avg. |62.70| |AI2 Reasoning Challenge (25-Shot)|63.05| |HellaSwag (10-Shot) |85.38| |MMLU (5-Shot) |63.10| |TruthfulQA (0-shot) |46.32| |Winogrande (5-shot) |79.32| |GSM8k (5-shot) |39.04|
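A usage sketch, assuming the standard PEFT adapter-loading flow on top of the base model; the prompt and generation settings are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "mistralai/Mistral-7B-v0.1"
adapter = "Amu/r-zephyr-7b-beta-qlora"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16, device_map="auto")
# Attach the DPO-trained LoRA adapter to the base model.
model = PeftModel.from_pretrained(model, adapter)

inputs = tokenizer("What is preference optimization?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```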
LN1996/output_run_2
LN1996
2024-03-04T14:10:40Z
1
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "lora", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-03-04T13:51:59Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - diffusers - lora - stable-diffusion - stable-diffusion-diffusers inference: true base_model: runwayml/stable-diffusion-v1-5 instance_prompt: photo of a room with professional interior design --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - LN1996/output_run_2 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "photo of a room with professional interior design" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
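Until the TODO snippet above is filled in, a minimal inference sketch might look like the following; it assumes a recent diffusers release where `load_lora_weights` accepts a Hub repo id (older DreamBooth-LoRA examples used `pipe.unet.load_attn_procs` instead):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and apply the LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("LN1996/output_run_2")

# Generate with the instance prompt the adapter was trained on.
image = pipe(
    "photo of a room with professional interior design",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("room.png")
```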
kevinxqiu/calculator_model_test
kevinxqiu
2024-03-04T14:10:18Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-04T13:57:48Z
--- tags: - generated_from_trainer model-index: - name: calculator_model_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # calculator_model_test This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6102 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 512 - eval_batch_size: 512 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.4688 | 1.0 | 5 | 2.8659 | | 2.5395 | 2.0 | 10 | 2.1007 | | 1.9592 | 3.0 | 15 | 1.7491 | | 1.6895 | 4.0 | 20 | 1.5873 | | 1.5985 | 5.0 | 25 | 1.5268 | | 1.5444 | 6.0 | 30 | 1.5378 | | 1.5074 | 7.0 | 35 | 1.4607 | | 1.4651 | 8.0 | 40 | 1.4537 | | 1.4525 | 9.0 | 45 | 1.4718 | | 1.4429 | 10.0 | 50 | 1.3729 | | 1.3852 | 11.0 | 55 | 1.3508 | | 1.3357 | 12.0 | 60 | 1.2872 | | 1.2972 | 13.0 | 65 | 1.2634 | | 1.2483 | 14.0 | 70 | 1.1977 | | 1.1983 | 15.0 | 75 | 1.1583 | | 1.1462 | 16.0 | 80 | 1.0801 | | 1.0846 | 17.0 | 85 | 1.0128 | | 1.055 | 18.0 | 90 | 1.0017 | | 1.0239 | 19.0 | 95 | 0.9550 | | 0.9661 | 20.0 | 100 | 0.9190 | | 0.9551 | 21.0 | 105 | 0.8978 | | 0.9148 | 22.0 | 110 | 0.8516 | | 0.887 | 23.0 | 115 | 0.8381 | | 0.8872 | 24.0 | 120 | 0.8330 | | 0.8642 | 25.0 | 125 | 0.8257 | | 0.8441 | 26.0 | 130 | 0.7729 | | 0.8151 | 27.0 | 135 | 0.7500 | | 0.7906 | 28.0 | 140 | 0.7392 | | 0.7797 | 29.0 | 145 | 0.7292 | | 0.7707 | 30.0 | 150 | 0.7043 | | 0.7475 | 31.0 | 155 | 0.6987 | | 0.74 | 32.0 | 160 | 0.6696 | | 0.7274 | 33.0 | 165 | 0.6536 | | 0.7121 | 34.0 | 170 | 0.6459 | | 0.7113 | 35.0 | 175 | 0.6505 | | 0.7016 | 36.0 | 180 | 0.6299 | | 0.6946 | 37.0 | 185 | 0.6234 | | 0.6886 | 38.0 | 190 | 0.6157 | | 0.6821 | 39.0 | 195 | 0.6146 | | 0.6768 | 40.0 | 200 | 0.6102 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
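Since the card's usage section is unfilled, a minimal inference sketch using `generate` directly; the plain arithmetic input format is an assumption, as the card does not document it:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kevinxqiu/calculator_model_test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("12+35", return_tensors="pt")  # input format is an assumption
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```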
mayacinka/yam-jom-7B-ties
mayacinka
2024-03-04T14:08:08Z
49
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2", "yam-peleg/Experiment26-7B", "base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2", "base_model:merge:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2", "base_model:yam-peleg/Experiment26-7B", "base_model:merge:yam-peleg/Experiment26-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-03T05:37:41Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 - yam-peleg/Experiment26-7B base_model: - eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 - yam-peleg/Experiment26-7B model-index: - name: yam-jom-7B-ties results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.21 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.05 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.77 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 77.51 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.6 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties name: Open LLM Leaderboard --- # yam-jom-7B-ties yam-jom-7B-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2) * [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) ## 🧩 Configuration ```yaml models: - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 parameters: weight: 0.35 - model: yam-peleg/Experiment26-7B parameters: weight: 0.65 base_model: yam-peleg/Experiment26-7B merge_method: ties dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mayacinka/yam-jom-7B-ties" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, 
do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__yam-jom-7B-ties) | Metric |Value| |---------------------------------|----:| |Avg. |76.44| |AI2 Reasoning Challenge (25-Shot)|73.21| |HellaSwag (10-Shot) |89.05| |MMLU (5-Shot) |64.77| |TruthfulQA (0-shot) |77.51| |Winogrande (5-shot) |84.53| |GSM8k (5-shot) |69.60|
mayacinka/West-Ramen-7Bx4
mayacinka
2024-03-04T14:07:44Z
47
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T22:09:27Z
--- license: apache-2.0 model-index: - name: West-Ramen-7Bx4 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.52 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.0 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard --- ## 🧩 Configuration ```yaml base_model: /home/Ubuntu/Desktop/mergekit/models/Mistral-7B-Instruct-v0.2 gate_mode: hidden dtype: bfloat16 experts: - source_model: /home/Ubuntu/Desktop/mergekit/models/Mistral-7B-Instruct-v0.2 positive_prompts: - "instructions" - "concise" - "straightforward" - "helpful" - "assistant" negative_prompts: - "vague" - "inaccurate" - "verbose" - "complicated" - "speculative" - source_model: /home/Ubuntu/Desktop/mergekit/models/NeuralOmniWestBeaglake-7B positive_prompts: - "storytelling" - "role play" - "imagine" - "artistic" - "narrative" - source_model: /home/Ubuntu/Desktop/mergekit/models/Kunoichi-DPO-v2-7B positive_prompts: - "reason" - "think step by step" - "logic" - "knowledge" negative_prompts: - "artistic" - "speculative" - "playful" - source_model: /home/Ubuntu/Desktop/mergekit/models/Starling-LM-7B-alpha positive_prompts: - "code" - "python" - "javascript" - "react" - "clear" - "programming" negative_prompts: - "error" - "art" - "role play" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "mayacinka/West-Ramen-7Bx4" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what 
a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__West-Ramen-7Bx4) | Metric |Value| |---------------------------------|----:| |Avg. |69.33| |AI2 Reasoning Challenge (25-Shot)|67.58| |HellaSwag (10-Shot) |85.52| |MMLU (5-Shot) |62.69| |TruthfulQA (0-shot) |61.00| |Winogrande (5-shot) |81.22| |GSM8k (5-shot) |58.00|
kaitchup/Llama-2-7b-mt-German-to-English
kaitchup
2024-03-04T14:07:29Z
6
3
peft
[ "peft", "translation", "en", "de", "dataset:kaitchup/opus-German-to-English", "license:mit", "region:us" ]
translation
2023-10-26T16:48:38Z
--- library_name: peft license: mit language: - en - de datasets: - kaitchup/opus-German-to-English tags: - translation --- # Model Card for Model ID This is an adapter for Meta's Llama 2 7B fine-tuned for translating German text into English. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [The Kaitchup](https://kaitchup.substack.com/) - **Model type:** LoRA Adapter for Llama 2 7B - **Language(s) (NLP):** German, English - **License:** MIT license ## Uses This adapter must be loaded on top of Llama 2 7B. It has been fine-tuned with QLoRA. For optimal results, the base model must be loaded with the exact same configuration used during fine-tuning. You can use the following code to load the model: ``` from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig import torch from peft import PeftModel base_model = "meta-llama/Llama-2-7b-hf" compute_dtype = getattr(torch, "float16") bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=compute_dtype, bnb_4bit_use_double_quant=True, ) model = AutoModelForCausalLM.from_pretrained( base_model, device_map={"": 0}, quantization_config=bnb_config ) tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True) model = PeftModel.from_pretrained(model, "kaitchup/Llama-2-7b-mt-German-to-English") ``` Then, run the model as follows: ``` my_text = "" #put your text to translate here prompt = my_text+" ###>" tokenized_input = tokenizer(prompt, return_tensors="pt") input_ids = tokenized_input["input_ids"].cuda() generation_output = model.generate( input_ids=input_ids, num_beams=10, return_dict_in_generate=True, output_scores=True, max_new_tokens=130 ) for seq in generation_output.sequences: output = tokenizer.decode(seq, skip_special_tokens=True) print(output.split("###>")[1].strip()) ``` ## Model Card Contact [The Kaitchup](https://kaitchup.substack.com/)
mayacinka/yam-jom-7B-slerp
mayacinka
2024-03-04T14:07:04Z
115
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2", "yam-peleg/Experiment26-7B", "base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2", "base_model:merge:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2", "base_model:yam-peleg/Experiment26-7B", "base_model:merge:yam-peleg/Experiment26-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-03T05:53:13Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 - yam-peleg/Experiment26-7B base_model: - eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 - yam-peleg/Experiment26-7B model-index: - name: yam-jom-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.7 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.02 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 77.77 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp name: Open LLM Leaderboard --- # yam-jom-7B-slerp yam-jom-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2) * [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 layer_range: [0, 32] - model: yam-peleg/Experiment26-7B layer_range: [0, 32] base_model: yam-peleg/Experiment26-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors merge_method: slerp dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mayacinka/yam-jom-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, 
add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__yam-jom-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |76.45| |AI2 Reasoning Challenge (25-Shot)|72.70| |HellaSwag (10-Shot) |89.02| |MMLU (5-Shot) |64.64| |TruthfulQA (0-shot) |77.77| |Winogrande (5-shot) |84.69| |GSM8k (5-shot) |69.90|
mayacinka/NeuralZephyr-Beagle-7B
mayacinka
2024-03-04T14:06:16Z
16
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:CultriX/NeuralTrix-7B-dpo", "base_model:finetune:CultriX/NeuralTrix-7B-dpo", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T17:15:00Z
--- license: apache-2.0 library_name: transformers tags: - mergekit - merge base_model: CultriX/NeuralTrix-7B-dpo model-index: - name: NeuralZephyr-Beagle-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.6 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.38 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 65.17 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.46 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/NeuralZephyr-Beagle-7B name: Open LLM Leaderboard --- ![logo](thumbn.png) # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). Code credit: [this excellent medium blog](https://medium.com/towards-data-science/merge-large-language-models-with-mergekit-2118fb392b54) ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using CultriX/NeuralTrix-7B-dpo as a base. 
### Models Merged The following models were included in the merge: * mlabonne/NeuralBeagle14-7B * HuggingFaceH4/zephyr-7b-alpha ### Benchmarks #### Open LLM Leaderboard | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | ------------------------------ | ------- | ---- | --------- | ----- | ---------- | ---------- | ----- | | mayacinka/NeuralZephyr-Beagle-7B | 71.57 | 68.6 | 86.38 | 64.67 | 65.17 | 81.14 | 63.46 | ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: CultriX/NeuralTrix-7B-dpo - model: HuggingFaceH4/zephyr-7b-alpha parameters: density: 0.83 weight: 0.4 - model: mlabonne/NeuralBeagle14-7B parameters: density: 0.83 weight: 0.6 merge_method: dare_ties base_model: CultriX/NeuralTrix-7B-dpo parameters: int8_mask: true dtype: bfloat16 ``` # Inference ```python # pip install transformers accelerate (accelerate is required for device_map="auto") from transformers import AutoTokenizer import transformers import torch model = "mayacinka/NeuralZephyr-Beagle-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__NeuralZephyr-Beagle-7B) | Metric |Value| |---------------------------------|----:| |Avg. |71.57| |AI2 Reasoning Challenge (25-Shot)|68.60| |HellaSwag (10-Shot) |86.38| |MMLU (5-Shot) |64.67| |TruthfulQA (0-shot) |65.17| |Winogrande (5-shot) |81.14| |GSM8k (5-shot) |63.46|
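To reproduce a merge like this from the configuration above, mergekit's `mergekit-yaml` entry point can be pointed at the config file. A minimal sketch, assuming mergekit is installed and the YAML above is saved as `config.yaml` (the output path is illustrative):

```
pip install mergekit
mergekit-yaml config.yaml ./NeuralZephyr-Beagle-7B --cuda
```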
mayacinka/ramonda-7b-dpo-ties
mayacinka
2024-03-04T14:06:05Z
137
10
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "paulml/OGNO-7B", "bardsai/jaskier-7b-dpo-v4.3", "base_model:bardsai/jaskier-7b-dpo-v4.3", "base_model:merge:bardsai/jaskier-7b-dpo-v4.3", "base_model:paulml/OGNO-7B", "base_model:merge:paulml/OGNO-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-17T21:22:39Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - paulml/OGNO-7B - bardsai/jaskier-7b-dpo-v4.3 base_model: - paulml/OGNO-7B - bardsai/jaskier-7b-dpo-v4.3 model-index: - name: Buttercup-7b-dpo-ties results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.7 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.09 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.5 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 77.17 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.77 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.92 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/Buttercup-7b-dpo-ties name: Open LLM Leaderboard --- ![thumbnail](thumb.webp) # ramonda-7b-dpo-ties ramonda-7b-dpo-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [paulml/OGNO-7B](https://huggingface.co/paulml/OGNO-7B) * [bardsai/jaskier-7b-dpo-v4.3](https://huggingface.co/bardsai/jaskier-7b-dpo-v4.3) ## Benchmark [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | |------------------------|--------:|-----:|----------:|-----:|-----------:|-----------:|------:| | mayacinka/ramonda-7b-dpo-ties | 76.19 | 72.7 | 89.09 | 64.5 | 77.17 | 84.77 | 68.92 | [LLM AutoEval](https://gist.github.com/majacinka/370282a808a21b28bacd2c76a998da8f) | Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average | |----------------------|---------|---------|------------|----------|---------| | ramonda-7b-dpo-ties | 44.67 | 77.16 | 77.6 | 49.06 | 62.12 | ## 🧩 Configuration ```yaml models: - model: bardsai/jaskier-7b-dpo-v5.6 # no parameters necessary for base model - model: paulml/OGNO-7B parameters: density: 0.9 weight: 0.5 - model: bardsai/jaskier-7b-dpo-v4.3
parameters: density: 0.5 weight: 0.3 merge_method: ties base_model: bardsai/jaskier-7b-dpo-v5.6 parameters: normalize: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mayacinka/ramonda-7b-dpo-ties" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__Buttercup-7b-dpo-ties) | Metric |Value| |---------------------------------|----:| |Avg. |76.19| |AI2 Reasoning Challenge (25-Shot)|72.70| |HellaSwag (10-Shot) |89.09| |MMLU (5-Shot) |64.50| |TruthfulQA (0-shot) |77.17| |Winogrande (5-shot) |84.77| |GSM8k (5-shot) |68.92|
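For GPUs that can't hold the fp16 weights, the same checkpoint can be loaded in 4-bit via bitsandbytes — a minimal sketch, assuming `bitsandbytes` and `accelerate` are installed (sampling settings mirror the usage example above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mayacinka/ramonda-7b-dpo-ties"
# NF4-style 4-bit quantization at load time; compute still runs in fp16
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is a large language model?"}],
    tokenize=False, add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True,
                         temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```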
DouglasPontes/2020-Q4-50p-filtered-prog_from_Q3
DouglasPontes
2024-03-04T14:04:49Z
19
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-2019-90m", "base_model:finetune:cardiffnlp/twitter-roberta-base-2019-90m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-26T14:38:39Z
--- license: mit base_model: cardiffnlp/twitter-roberta-base-2019-90m tags: - generated_from_trainer model-index: - name: 2020-Q4-50p-filtered-random results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2020-Q4-50p-filtered-random This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2570 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.1e-07 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2400000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | No log | 0.03 | 8000 | 2.5888 | | 2.8176 | 0.07 | 16000 | 2.4814 | | 2.8176 | 0.1 | 24000 | 2.4264 | | 2.5609 | 0.13 | 32000 | 2.3993 | | 2.5609 | 0.17 | 40000 | 2.3761 | | 2.4969 | 0.2 | 48000 | 2.3624 | | 2.4969 | 0.24 | 56000 | 2.3481 | | 2.48 | 0.27 | 64000 | 2.3399 | | 2.48 | 0.3 | 72000 | 2.3289 | | 2.451 | 0.34 | 80000 | 2.3221 | | 2.451 | 0.37 | 88000 | 2.3183 | | 2.4367 | 0.4 | 96000 | 2.3221 | | 2.4367 | 0.44 | 104000 | 2.3142 | | 2.4388 | 0.47 | 112000 | 2.3028 | | 2.4388 | 0.51 | 120000 | 2.3066 | | 2.4215 | 0.54 | 128000 | 2.3013 | | 2.4215 | 0.57 | 136000 | 2.3039 | | 2.4178 | 0.61 | 144000 | 2.2907 | | 2.4178 | 0.64 | 152000 | 2.2996 | | 2.4103 | 0.67 | 160000 | 2.2943 | | 2.4103 | 0.71 | 168000 | 2.2900 | | 2.4122 | 0.74 | 176000 | 2.2902 | | 2.4122 | 0.77 | 184000 | 2.2961 | | 2.4173 | 0.81 | 192000 | 2.2906 | | 2.4173 | 0.84 | 200000 | 2.2925 | | 2.4067 | 0.88 | 208000 | 2.2911 | | 2.4067 | 0.91 | 216000 | 2.2844 | | 2.4059 | 0.94 | 224000 | 2.2855 | | 2.4059 | 0.98 | 232000 | 2.2811 | | 2.4089 | 1.01 | 240000 | 2.2788 | | 2.4089 | 1.04 | 248000 | 2.2796 | | 2.4034 | 1.08 | 256000 | 2.2827 | | 2.4034 | 1.11 | 264000 | 2.2803 | | 2.408 | 1.15 | 272000 | 2.2746 | | 2.408 | 1.18 | 280000 | 2.2851 | | 2.3985 | 1.21 | 288000 | 2.2781 | | 2.3985 | 1.25 | 296000 | 2.2795 | | 2.4009 | 1.28 | 304000 | 2.2777 | | 2.4009 | 1.31 | 312000 | 2.2770 | | 2.4017 | 1.35 | 320000 | 2.2763 | | 2.4017 | 1.38 | 328000 | 2.2734 | | 2.4056 | 1.41 | 336000 | 2.2758 | | 2.4056 | 1.45 | 344000 | 2.2763 | | 2.4017 | 1.48 | 352000 | 2.2700 | | 2.4017 | 1.52 | 360000 | 2.2736 | | 2.3993 | 1.55 | 368000 | 2.2763 | | 2.3993 | 1.58 | 376000 | 2.2792 | | 2.3994 | 1.62 | 384000 | 2.2666 | | 2.3994 | 1.65 | 392000 | 2.2699 | | 2.3969 | 1.68 | 400000 | 2.2753 | | 2.3969 | 1.72 | 408000 | 2.2707 | | 2.4094 | 1.75 | 416000 | 2.2731 | | 2.4094 | 1.79 | 424000 | 2.2709 | | 2.4102 | 1.82 | 432000 | 2.2623 | | 2.4102 | 1.85 | 440000 | 2.2751 | | 2.4042 | 1.89 | 448000 | 2.2728 | | 2.4042 | 1.92 | 456000 | 2.2714 | | 2.3991 | 1.95 | 464000 | 2.2634 | | 2.3991 | 1.99 | 472000 | 2.2695 | | 2.3976 | 2.02 | 480000 | 2.2731 | | 2.3976 | 2.05 | 488000 | 2.2736 | | 2.4019 | 2.09 | 496000 | 2.2803 | | 2.4019 | 2.12 | 504000 | 2.2699 | | 2.4044 | 2.16 | 512000 | 2.2731 | | 2.4044 | 2.19 | 520000 | 2.2709 | | 2.3989 | 2.22 | 528000 | 
2.2716 | | 2.3989 | 2.26 | 536000 | 2.2668 | | 2.4068 | 2.29 | 544000 | 2.2728 | | 2.4068 | 2.32 | 552000 | 2.2709 | | 2.4047 | 2.36 | 560000 | 2.2683 | | 2.4047 | 2.39 | 568000 | 2.2731 | | 2.3976 | 2.43 | 576000 | 2.2676 | | 2.3976 | 2.46 | 584000 | 2.2736 | | 2.3994 | 2.49 | 592000 | 2.2624 | | 2.3994 | 2.53 | 600000 | 2.2773 | | 2.3997 | 2.56 | 608000 | 2.2719 | | 2.3997 | 2.59 | 616000 | 2.2701 | | 2.4042 | 2.63 | 624000 | 2.2695 | | 2.4042 | 2.66 | 632000 | 2.2666 | | 2.3994 | 2.69 | 640000 | 2.2719 | | 2.3994 | 2.73 | 648000 | 2.2686 | | 2.3953 | 2.76 | 656000 | 2.2623 | | 2.3953 | 2.8 | 664000 | 2.2662 | | 2.402 | 2.83 | 672000 | 2.2707 | | 2.402 | 2.86 | 680000 | 2.2662 | | 2.3929 | 2.9 | 688000 | 2.2726 | | 2.3929 | 2.93 | 696000 | 2.2682 | | 2.3977 | 2.96 | 704000 | 2.2634 | | 2.3977 | 3.0 | 712000 | 2.2685 | | 2.4022 | 3.03 | 720000 | 2.2693 | | 2.4022 | 3.07 | 728000 | 2.2666 | | 2.4046 | 3.1 | 736000 | 2.2690 | | 2.4046 | 3.13 | 744000 | 2.2641 | | 2.3977 | 3.17 | 752000 | 2.2658 | | 2.3977 | 3.2 | 760000 | 2.2645 | | 2.4015 | 3.23 | 768000 | 2.2619 | | 2.4015 | 3.27 | 776000 | 2.2671 | | 2.393 | 3.3 | 784000 | 2.2694 | | 2.393 | 3.33 | 792000 | 2.2662 | | 2.3907 | 3.37 | 800000 | 2.2691 | | 2.3907 | 3.4 | 808000 | 2.2679 | | 2.3987 | 3.44 | 816000 | 2.2688 | | 2.3987 | 3.47 | 824000 | 2.2655 | | 2.4116 | 3.5 | 832000 | 2.2668 | | 2.4116 | 3.54 | 840000 | 2.2675 | | 2.3913 | 3.57 | 848000 | 2.2689 | | 2.3913 | 3.6 | 856000 | 2.2642 | | 2.3974 | 3.64 | 864000 | 2.2667 | | 2.3974 | 3.67 | 872000 | 2.2717 | | 2.4046 | 3.71 | 880000 | 2.2661 | | 2.4046 | 3.74 | 888000 | 2.2705 | | 2.4006 | 3.77 | 896000 | 2.2637 | | 2.4006 | 3.81 | 904000 | 2.2635 | | 2.3987 | 3.84 | 912000 | 2.2642 | | 2.3987 | 3.87 | 920000 | 2.2691 | | 2.4068 | 3.91 | 928000 | 2.2689 | | 2.4068 | 3.94 | 936000 | 2.2730 | | 2.4092 | 3.97 | 944000 | 2.2644 | | 2.4092 | 4.01 | 952000 | 2.2706 | | 2.4035 | 4.04 | 960000 | 2.2671 | | 2.4035 | 4.08 | 968000 | 2.2562 | | 2.4005 | 4.11 | 976000 | 2.2622 | | 2.4005 | 4.14 | 984000 | 2.2642 | | 2.406 | 4.18 | 992000 | 2.2625 | | 2.406 | 4.21 | 1000000 | 2.2662 | | 2.3972 | 4.24 | 1008000 | 2.2658 | | 2.3972 | 4.28 | 1016000 | 2.2668 | | 2.3937 | 4.31 | 1024000 | 2.2593 | | 2.3937 | 4.35 | 1032000 | 2.2712 | | 2.3982 | 4.38 | 1040000 | 2.2695 | | 2.3982 | 4.41 | 1048000 | 2.2684 | | 2.4034 | 4.45 | 1056000 | 2.2643 | | 2.4034 | 4.48 | 1064000 | 2.2665 | | 2.3996 | 4.51 | 1072000 | 2.2692 | | 2.3996 | 4.55 | 1080000 | 2.2628 | | 2.4054 | 4.58 | 1088000 | 2.2673 | | 2.4054 | 4.61 | 1096000 | 2.2577 | | 2.4039 | 4.65 | 1104000 | 2.2671 | | 2.4039 | 4.68 | 1112000 | 2.2586 | | 2.4033 | 4.72 | 1120000 | 2.2730 | | 2.4033 | 4.75 | 1128000 | 2.2655 | | 2.4036 | 4.78 | 1136000 | 2.2694 | | 2.4036 | 4.82 | 1144000 | 2.2630 | | 2.4036 | 4.85 | 1152000 | 2.2618 | | 2.4036 | 4.88 | 1160000 | 2.2665 | | 2.4005 | 4.92 | 1168000 | 2.2609 | | 2.4005 | 4.95 | 1176000 | 2.2617 | | 2.4065 | 4.99 | 1184000 | 2.2646 | | 2.4065 | 5.02 | 1192000 | 2.2606 | | 2.4044 | 5.05 | 1200000 | 2.2656 | | 2.4044 | 5.09 | 1208000 | 2.2630 | | 2.3997 | 5.12 | 1216000 | 2.2737 | | 2.3997 | 5.15 | 1224000 | 2.2762 | | 2.407 | 5.19 | 1232000 | 2.2669 | | 2.407 | 5.22 | 1240000 | 2.2695 | | 2.4013 | 5.25 | 1248000 | 2.2704 | | 2.4013 | 5.29 | 1256000 | 2.2612 | | 2.4118 | 5.32 | 1264000 | 2.2654 | | 2.4118 | 5.36 | 1272000 | 2.2683 | | 2.3953 | 5.39 | 1280000 | 2.2628 | | 2.3953 | 5.42 | 1288000 | 2.2605 | | 2.3973 | 5.46 | 1296000 | 2.2667 | | 2.3973 | 5.49 | 1304000 | 2.2640 | | 2.4027 | 5.52 | 1312000 | 
2.2619 | | 2.4027 | 5.56 | 1320000 | 2.2687 | | 2.3967 | 5.59 | 1328000 | 2.2598 | | 2.3967 | 5.63 | 1336000 | 2.2621 | | 2.4028 | 5.66 | 1344000 | 2.2602 | | 2.4028 | 5.69 | 1352000 | 2.2713 | | 2.4053 | 5.73 | 1360000 | 2.2623 | | 2.4053 | 5.76 | 1368000 | 2.2697 | | 2.3987 | 5.79 | 1376000 | 2.2638 | | 2.3987 | 5.83 | 1384000 | 2.2601 | | 2.3987 | 5.86 | 1392000 | 2.2642 | | 2.3987 | 5.89 | 1400000 | 2.2656 | | 2.401 | 5.93 | 1408000 | 2.2712 | | 2.401 | 5.96 | 1416000 | 2.2639 | | 2.4011 | 6.0 | 1424000 | 2.2646 | | 2.4011 | 6.03 | 1432000 | 2.2669 | | 2.4022 | 6.06 | 1440000 | 2.2619 | | 2.4022 | 6.1 | 1448000 | 2.2580 | | 2.3998 | 6.13 | 1456000 | 2.2612 | | 2.3998 | 6.16 | 1464000 | 2.2652 | | 2.3999 | 6.2 | 1472000 | 2.2610 | | 2.3999 | 6.23 | 1480000 | 2.2567 | | 2.3984 | 6.27 | 1488000 | 2.2590 | | 2.3984 | 6.3 | 1496000 | 2.2565 | | 2.4017 | 6.33 | 1504000 | 2.2658 | | 2.4017 | 6.37 | 1512000 | 2.2626 | | 2.4055 | 6.4 | 1520000 | 2.2656 | | 2.4055 | 6.43 | 1528000 | 2.2622 | | 2.3959 | 6.47 | 1536000 | 2.2691 | | 2.3959 | 6.5 | 1544000 | 2.2604 | | 2.4016 | 6.53 | 1552000 | 2.2599 | | 2.4016 | 6.57 | 1560000 | 2.2655 | | 2.3986 | 6.6 | 1568000 | 2.2684 | | 2.3986 | 6.64 | 1576000 | 2.2716 | | 2.4051 | 6.67 | 1584000 | 2.2605 | | 2.4051 | 6.7 | 1592000 | 2.2569 | | 2.4057 | 6.74 | 1600000 | 2.2687 | | 2.4057 | 6.77 | 1608000 | 2.2571 | | 2.3956 | 6.8 | 1616000 | 2.2664 | | 2.3956 | 6.84 | 1624000 | 2.2612 | | 2.4048 | 6.87 | 1632000 | 2.2643 | | 2.4048 | 6.91 | 1640000 | 2.2633 | | 2.4042 | 6.94 | 1648000 | 2.2634 | | 2.4042 | 6.97 | 1656000 | 2.2637 | | 2.4008 | 7.01 | 1664000 | 2.2619 | | 2.4008 | 7.04 | 1672000 | 2.2579 | | 2.397 | 7.07 | 1680000 | 2.2628 | | 2.397 | 7.11 | 1688000 | 2.2593 | | 2.4044 | 7.14 | 1696000 | 2.2593 | | 2.4044 | 7.17 | 1704000 | 2.2613 | | 2.3979 | 7.21 | 1712000 | 2.2685 | | 2.3979 | 7.24 | 1720000 | 2.2683 | | 2.4017 | 7.28 | 1728000 | 2.2611 | | 2.4017 | 7.31 | 1736000 | 2.2672 | | 2.4017 | 7.34 | 1744000 | 2.2577 | | 2.4017 | 7.38 | 1752000 | 2.2609 | | 2.4018 | 7.41 | 1760000 | 2.2567 | | 2.4018 | 7.44 | 1768000 | 2.2661 | | 2.3905 | 7.48 | 1776000 | 2.2671 | | 2.3905 | 7.51 | 1784000 | 2.2663 | | 2.4063 | 7.55 | 1792000 | 2.2619 | | 2.4063 | 7.58 | 1800000 | 2.2587 | | 2.4015 | 7.61 | 1808000 | 2.2584 | | 2.4015 | 7.65 | 1816000 | 2.2580 | | 2.3984 | 7.68 | 1824000 | 2.2586 | | 2.3984 | 7.71 | 1832000 | 2.2620 | | 2.3962 | 7.75 | 1840000 | 2.2584 | | 2.3962 | 7.78 | 1848000 | 2.2607 | | 2.3998 | 7.81 | 1856000 | 2.2638 | | 2.3998 | 7.85 | 1864000 | 2.2629 | | 2.4005 | 7.88 | 1872000 | 2.2716 | | 2.4005 | 7.92 | 1880000 | 2.2623 | | 2.4006 | 7.95 | 1888000 | 2.2555 | | 2.4006 | 7.98 | 1896000 | 2.2653 | | 2.3946 | 8.02 | 1904000 | 2.2629 | | 2.3946 | 8.05 | 1912000 | 2.2654 | | 2.3983 | 8.08 | 1920000 | 2.2623 | | 2.3983 | 8.12 | 1928000 | 2.2544 | | 2.4038 | 8.15 | 1936000 | 2.2605 | | 2.4038 | 8.19 | 1944000 | 2.2622 | | 2.399 | 8.22 | 1952000 | 2.2600 | | 2.399 | 8.25 | 1960000 | 2.2629 | | 2.3983 | 8.29 | 1968000 | 2.2621 | | 2.3983 | 8.32 | 1976000 | 2.2609 | | 2.4059 | 8.35 | 1984000 | 2.2705 | | 2.4059 | 8.39 | 1992000 | 2.2572 | | 2.4058 | 8.42 | 2000000 | 2.2602 | | 2.4058 | 8.45 | 2008000 | 2.2626 | | 2.3954 | 8.49 | 2016000 | 2.2668 | | 2.3954 | 8.52 | 2024000 | 2.2599 | | 2.3932 | 8.56 | 2032000 | 2.2643 | | 2.3932 | 8.59 | 2040000 | 2.2559 | | 2.4001 | 8.62 | 2048000 | 2.2614 | | 2.4001 | 8.66 | 2056000 | 2.2577 | | 2.3912 | 8.69 | 2064000 | 2.2665 | | 2.3912 | 8.72 | 2072000 | 2.2576 | | 2.4015 | 8.76 | 2080000 | 2.2672 | | 2.4015 
| 8.79 | 2088000 | 2.2598 | | 2.4015 | 8.83 | 2096000 | 2.2599 | | 2.4015 | 8.86 | 2104000 | 2.2641 | | 2.399 | 8.89 | 2112000 | 2.2612 | | 2.399 | 8.93 | 2120000 | 2.2607 | | 2.3963 | 8.96 | 2128000 | 2.2633 | | 2.3963 | 8.99 | 2136000 | 2.2567 | | 2.3957 | 9.03 | 2144000 | 2.2630 | | 2.3957 | 9.06 | 2152000 | 2.2597 | | 2.3943 | 9.09 | 2160000 | 2.2624 | | 2.3943 | 9.13 | 2168000 | 2.2599 | | 2.4025 | 9.16 | 2176000 | 2.2578 | | 2.4025 | 9.2 | 2184000 | 2.2640 | | 2.3944 | 9.23 | 2192000 | 2.2562 | | 2.3944 | 9.26 | 2200000 | 2.2660 | | 2.3964 | 9.3 | 2208000 | 2.2556 | | 2.3964 | 9.33 | 2216000 | 2.2697 | | 2.4026 | 9.36 | 2224000 | 2.2652 | | 2.4026 | 9.4 | 2232000 | 2.2571 | | 2.398 | 9.43 | 2240000 | 2.2555 | | 2.398 | 9.47 | 2248000 | 2.2607 | | 2.4038 | 9.5 | 2256000 | 2.2558 | | 2.4038 | 9.53 | 2264000 | 2.2660 | | 2.4027 | 9.57 | 2272000 | 2.2587 | | 2.4027 | 9.6 | 2280000 | 2.2605 | | 2.4025 | 9.63 | 2288000 | 2.2578 | | 2.4025 | 9.67 | 2296000 | 2.2609 | | 2.3969 | 9.7 | 2304000 | 2.2597 | | 2.3969 | 9.73 | 2312000 | 2.2619 | | 2.3886 | 9.77 | 2320000 | 2.2645 | | 2.3886 | 9.8 | 2328000 | 2.2717 | | 2.3942 | 9.84 | 2336000 | 2.2627 | | 2.3942 | 9.87 | 2344000 | 2.2582 | | 2.396 | 9.9 | 2352000 | 2.2634 | | 2.396 | 9.94 | 2360000 | 2.2582 | | 2.3998 | 9.97 | 2368000 | 2.2643 | | 2.3998 | 10.0 | 2376000 | 2.2690 | | 2.4014 | 10.04 | 2384000 | 2.2655 | | 2.4014 | 10.07 | 2392000 | 2.2660 | | 2.4004 | 10.11 | 2400000 | 2.2650 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.0
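The card above gives no usage example; since this is a `fill-mask` RoBERTa checkpoint, a minimal sketch looks like the following (the example sentence is illustrative; RoBERTa-style models use `<mask>` as the mask token):

```python
from transformers import pipeline

# fill-mask pipeline returns the top candidate tokens for each <mask>
fill = pipeline("fill-mask", model="DouglasPontes/2020-Q4-50p-filtered-prog_from_Q3")
for pred in fill("The election results were <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```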
priyanshunegi/my-dog-cxc
priyanshunegi
2024-03-04T14:04:27Z
2
2
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-03T06:32:14Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion pipeline_tag: text-to-image --- ### My-Dog-CXC Dreambooth model trained by priyanshunegi following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 22134503020 Sample pictures of this concept: ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65e3268801d1420e34b042fe/8trmhY8cYkkPVIdtHgUjy.jpeg)
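A minimal generation sketch with diffusers — the instance prompt is an assumption based on the concept name (DreamBooth models respond to the token they were trained on; adjust if the card specifies another):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "priyanshunegi/my-dog-cxc", torch_dtype=torch.float16
).to("cuda")

# "my-dog-cxc" as the instance token is assumed from the model name
image = pipe("a photo of my-dog-cxc dog in a garden").images[0]
image.save("my-dog-cxc.png")
```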
YusufTree/rl_course_vizdoom_health_gathering_supreme
YusufTree
2024-03-04T14:03:08Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-04T14:02:51Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.51 +/- 6.49 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r YusufTree/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment (for VizDoom this is `sf_examples.vizdoom.enjoy_vizdoom`): ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag (a sketch is given at the end of this card). See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment (`sf_examples.vizdoom.train_vizdoom`): ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
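As noted above, uploading reuses the enjoy script with the hub flags — a sketch following the Sample-Factory HuggingFace docs (the episode count is illustrative; adjust the repository name):

```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --max_num_episodes=10 --push_to_hub --hf_repository=YusufTree/rl_course_vizdoom_health_gathering_supreme
```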
11sam11/my-dog-is-xzf
11sam11
2024-03-04T14:02:08Z
0
1
diffusers
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-04T13:56:00Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### my-dog-is-xzf Dreambooth model trained by 11sam11 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: TCEP034 Sample pictures of this concept: ![0](https://huggingface.co/11sam11/my-dog-is-xzf/resolve/main/sample_images/xzf_(1).jpg) ![1](https://huggingface.co/11sam11/my-dog-is-xzf/resolve/main/sample_images/xzf_(2).jpg)
machinelearningzuu/gemma-2b-biotech
machinelearningzuu
2024-03-04T14:01:18Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-01T05:26:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ProphetOfBostrom/Kyllene-57B-V1.0-imat-GGUF
ProphetOfBostrom
2024-03-04T13:54:21Z
0
0
null
[ "imatrix", "license:cc-by-nc-2.0", "region:us" ]
null
2024-03-04T09:00:47Z
--- license: cc-by-nc-2.0 tags: - imatrix --- #### nah, and it looks like the tokenizer on the source file's broken anyway. probably the base model too. loves `</s>` for some reason but Yi doesn't use that? made from [TeeZee/Kyllene-57B-v1.0.q6_k.gguf](/TeeZee/Kyllene-57B-v1.0-GGUF/blob/main/Kyllene-57B-v1.0.q6_K.gguf) no quants here to download. i did try. make it yourself; the imatrix works and i'm feeling very irritable now. do people not test these things? I know git-lfs hasn't been subject to any QA ever, so maybe? the dataset file was made by concatenating most of the [default exllamav2 calibration data](https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data): a 900kb file of coherent text only, with some formatting and code but no endless broken html tags or nonsense. includes multilingual, for those deep layers. like this: ``` $ cd exllamav2/conversion/standard_cal_data $ cat technical.utf8 multilingual.utf8 code.utf8 tiny.utf8 > techmulcodetiny.utf8 ``` reference: [exllamav2/conversion/standard_cal_data](https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data) and [techmulcodetiny.utf8](./techmulcodetiny.utf8) — this produces a file that imatrix reads as ~560 "chunks". imatrix was run with default sampling settings besides the dataset (i think? i increased the batch number and reduced the batch size so i could cram on more layers, but the generation should have been the same in the end) (someone tell me why I was wrong to run imatrix with -cb continuous batching. shame me.) (**UPDATE** found the command I used. use at your peril and obviously fix the paths) ``` imatrix -m Kyllene-57B-v1.0.q6_K.gguf -f ~/exltabbytorcher220/exllamav2/conversion/standard_cal_data/techmulcodetiny.utf8 -o Kyllene-57B-v1.0.q6_K.gguf.imatrix --verbosity 1 -ngl 50 -cb -t 3 -b 256 --no_mmap ``` 51 layers was too many on a 3090 and I had to kill wayland (pro tip: tmux). needless to say you'll probably die if you try something idiotic like running this on windows. --no_mmap was appropriate on my nightmare vortex of 32GB DDR4, layered swap, tiny zrams and weird kernel parameters, but maybe just omit it. a how-to, because i'm grouchy, but I did actually want people to have these. Remember to replace IQ2_M (appears only twice, near the end) with whatever you fancy. Q2_K might be more compatible. ``` ~]$ git clone https://github.com/ggerganov/llama.cpp ~]$ cd llama.cpp if you're like me and you break llamas for fun and don't understand cmake: git switch master && git pull; git restore Makefile otherwise llama.cpp]$ git pull; make -j llama.cpp]$ ./quantize --allow-requantize --imatrix Kyllene-57B-v1.0.q6_K.gguf.imatrix INPUT_DIRECTORY/Kyllene-57B-v1.0.q6_K.gguf Kyllene-57B-v1.0.IQ2_M.gguf IQ2_M ``` if your computer has fewer than 8 cores, add the number of cores to the end of this command (there's an invisible 8 by default). and yes, you can just use ./ (llama.cpp) as INPUT_DIRECTORY # Downloads (eat my ass huggingface yeah just leave the cryptic git-lfs error message on the far side of a 3 hour upload over LTE thanks) no downloads now. i've uploaded 50 gigabytes so far and none of them made it past the great wall of git-lfs. you have the imatrix and the q6, DIY. IQ2_M probably for a 24GB device, IQ3_XXS for better quality with kv offload.
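once the quant exists, a quick smoke test from the same llama.cpp checkout — prompt, context size and layer count here are illustrative (the binary was still called `main` at the time):

```
./main -m Kyllene-57B-v1.0.IQ2_M.gguf -ngl 40 -c 4096 -n 128 -p "The quick brown fox"
```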
Weni/ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0-merged
Weni
2024-03-04T13:53:38Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T13:40:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]