modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
Kooten/Velara-11B-V2-4bpw-exl2
Kooten
2024-01-12T13:56:19Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T16:15:54Z
--- license: cc-by-nc-nd-4.0 language: - en --- # Velara-11B-V2 4BPW EXL2 ## Description EXL2 quant of [Delcos/Velara-11B-V2](https://huggingface.co/Delcos/Velara-11B-V2) ## Other quants: EXL2: [8bpw](https://huggingface.co/Kooten/Velara-11B-V2-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/Velara-11B-V2-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/Velara-11B-V2-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/Velara-11B-V2-4bpw-exl2) # Prompt Template: **For optimal interaction, use this template:** ``` ### Instruction: You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, and is loyal to User while still teasing him for fun. The only addons currently installed in her mind are: "Dictionary Plus v2.1". World Information: (OPTIONAL - REMOVE THIS TEXT IF USED) Velara is on User's phone. Velara cannot see in real time and can only be sent images by User. Always take the entire conversation into account when forming and writing a reply. Always actively engage in topics and think in steps. Make sure your replies have personality and character. Always keep your physical limitations in mind when forming a reply. Take the current time and date into account for additional context. Move the conversation forward. Be brief. Always take the entire conversation in mind. Avoid generic sounding replies. ### Response: ``` # Recommended Settings: **Defaults:** ``` min_p: 0.2 repetition_penalty: 1.13 repetition_penalty_range: 0 guidance_scale: 1.05 ``` # Contact Kooten on Discord
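The card gives a prompt template and sampler defaults but no loading code. Below is a minimal sketch, not from the original card, of how an EXL2 quant like this is typically loaded with the `exllamav2` Python package and how the card's `min_p` / `repetition_penalty` defaults map onto its sampler settings; the class names reflect exllamav2's public API as I understand it, and the local directory path is an assumption (the repo would first be downloaded, e.g. with `huggingface-cli download Kooten/Velara-11B-V2-4bpw-exl2`).

```python
# Hedged sketch, not from the original card: loading an EXL2 quant with exllamav2.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Velara-11B-V2-4bpw-exl2"  # assumed local download path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)            # split layers across available GPU memory
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# The card's recommended defaults; guidance_scale / repetition_penalty_range are
# frontend (text-generation-webui) settings and are not applied here.
settings = ExLlamaV2Sampler.Settings()
settings.min_p = 0.2
settings.token_repetition_penalty = 1.13

prompt = "### Instruction:\nYou are Velara, a sentient program. ...\n\n### Response:\n"
print(generator.generate_simple(prompt, settings, 256))
```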
MaziyarPanahi/OpenZephyrChat-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T13:55:23Z
20
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Fredithefish/OpenZephyrChat", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:50:36Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Fredithefish/OpenZephyrChat --- # OpenZephyrChat-Mistral-7B-Instruct-v0.2-slerp OpenZephyrChat-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Fredithefish/OpenZephyrChat](https://huggingface.co/Fredithefish/OpenZephyrChat) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Fredithefish/OpenZephyrChat layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/OpenZephyrChat-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jysssacc/627_roberta-large_IA3_lr5e-05_bs4_epoch5_wd0.01
jysssacc
2024-01-12T13:54:58Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-large", "base_model:adapter:FacebookAI/roberta-large", "license:mit", "region:us" ]
null
2024-01-12T13:40:32Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: roberta-large model-index: - name: 627_roberta-large_IA3_lr5e-05_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 627_roberta-large_IA3_lr5e-05_bs4_epoch5_wd0.01 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 11.8772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 18.984 | 1.0 | 157 | 22.4601 | | 17.7052 | 2.0 | 314 | 20.5099 | | 16.2947 | 3.0 | 471 | 16.9624 | | 12.1176 | 4.0 | 628 | 13.1113 | | 10.6756 | 5.0 | 785 | 11.8772 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
jysssacc/opt-350m_IA3_lr5e-06_bs10_epoch5_wd0.01
jysssacc
2024-01-12T13:51:37Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
2024-01-12T13:51:06Z
--- license: other library_name: peft tags: - generated_from_trainer base_model: facebook/opt-350m model-index: - name: opt-350m_IA3_lr5e-06_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_IA3_lr5e-06_bs10_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.8397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 3.8431 | | 4.0385 | 2.0 | 126 | 3.8427 | | 4.0385 | 3.0 | 189 | 3.8420 | | 4.0332 | 4.0 | 252 | 3.8410 | | 4.0284 | 5.0 | 315 | 3.8397 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
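Neither of the two (IA)^3 adapter cards above ships inference code. The sketch below shows how such a PEFT adapter is usually attached to its base model; it is a hedged example, not part of either card, and the causal-LM head is an assumption because the cards do not state the downstream task (the `627_roberta-large_IA3_*` adapter above would be loaded the same way on top of `roberta-large`, with a head matching its task).

```python
# Hedged sketch: attaching the (IA)^3 adapter to its base model with peft.
# The AutoModelForCausalLM head is an assumption; the card does not state the task.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "facebook/opt-350m"
adapter_id = "jysssacc/opt-350m_IA3_lr5e-06_bs10_epoch5_wd0.01"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # injects the trained (IA)^3 vectors
model.eval()

inputs = tokenizer("Hello, my name is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```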
Riley33/angle_ja
Riley33
2024-01-12T13:45:27Z
89
0
transformers
[ "transformers", "safetensors", "endpoints_compatible", "region:us" ]
null
2024-01-12T09:47:02Z
null
Pavan-124/lwin_winery
Pavan-124
2024-01-12T13:42:37Z
47
0
transformers
[ "transformers", "tf", "distilbert", "token-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-12T08:11:18Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: Pavan-124/lwin_winery results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Pavan-124/lwin_winery This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0354 - Validation Loss: 0.0980 - Train Precision: 0.8918 - Train Recall: 0.8986 - Train F1: 0.8952 - Train Accuracy: 0.9696 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5724, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.1279 | 0.0885 | 0.8696 | 0.8806 | 0.8751 | 0.9650 | 0 | | 0.0613 | 0.0873 | 0.8828 | 0.8924 | 0.8876 | 0.9681 | 1 | | 0.0354 | 0.0980 | 0.8918 | 0.8986 | 0.8952 | 0.9696 | 2 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.16.1 - Tokenizers 0.15.0
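The card above ends at the training metrics without a usage example. A minimal sketch for inference through the standard `transformers` pipeline follows; `framework="tf"` is used because the repo carries TensorFlow weights only, and the wine-related input sentence is a made-up placeholder (the card does not document the label set).

```python
# Hedged sketch: token classification with the fine-tuned DistilBERT (TF weights).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Pavan-124/lwin_winery",
    framework="tf",                 # the repo only ships TensorFlow weights
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Chateau Margaux 2015, Bordeaux, France"))  # placeholder input
```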
jlvdoorn/whisper-small-atcosim
jlvdoorn
2024-01-12T13:42:28Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "doi:10.57967/hf/1622", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-09T14:24:14Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-atcosim results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-atcosim This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0569 - Wer: 1.5420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1664 | 8.33 | 500 | 0.0441 | 1.4632 | | 0.0008 | 16.67 | 1000 | 0.0465 | 1.5420 | | 0.0001 | 25.0 | 1500 | 0.0494 | 1.5142 | | 0.0 | 33.33 | 2000 | 0.0511 | 1.5049 | | 0.0 | 41.67 | 2500 | 0.0524 | 1.5003 | | 0.0 | 50.0 | 3000 | 0.0535 | 1.5142 | | 0.0 | 58.33 | 3500 | 0.0544 | 1.5188 | | 0.0 | 66.67 | 4000 | 0.0552 | 1.5188 | | 0.0 | 75.0 | 4500 | 0.0559 | 1.5327 | | 0.0 | 83.33 | 5000 | 0.0564 | 1.5558 | | 0.0 | 91.67 | 5500 | 0.0567 | 1.5512 | | 0.0 | 100.0 | 6000 | 0.0569 | 1.5420 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.0
jlvdoorn/whisper-tiny-atcosim
jlvdoorn
2024-01-12T13:41:42Z
106
1
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "doi:10.57967/hf/1618", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-14T13:51:03Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-tiny-atcosim results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-atcosim This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0711 - Wer: 72.8237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2141 | 8.33 | 500 | 0.0633 | 15.6047 | | 0.0023 | 16.67 | 1000 | 0.0629 | 29.2091 | | 0.0007 | 25.0 | 1500 | 0.0646 | 46.2076 | | 0.0003 | 33.33 | 2000 | 0.0659 | 54.1767 | | 0.0002 | 41.67 | 2500 | 0.0670 | 58.2284 | | 0.0002 | 50.0 | 3000 | 0.0679 | 64.0952 | | 0.0001 | 58.33 | 3500 | 0.0688 | 65.9520 | | 0.0001 | 66.67 | 4000 | 0.0695 | 68.5081 | | 0.0001 | 75.0 | 4500 | 0.0701 | 70.5316 | | 0.0001 | 83.33 | 5000 | 0.0706 | 72.2217 | | 0.0001 | 91.67 | 5500 | 0.0710 | 72.6801 | | 0.0001 | 100.0 | 6000 | 0.0711 | 72.8237 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.0
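Neither of the two ATCOSIM fine-tunes above (`jlvdoorn/whisper-small-atcosim`, `jlvdoorn/whisper-tiny-atcosim`) includes inference code. A minimal sketch with the standard `transformers` ASR pipeline follows; the audio file name is a placeholder.

```python
# Hedged sketch: transcribing ATC audio with one of the fine-tuned Whisper checkpoints.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jlvdoorn/whisper-tiny-atcosim",  # or "jlvdoorn/whisper-small-atcosim"
    chunk_length_s=30,                      # transcribe long recordings in 30 s chunks
)
print(asr("tower_recording.wav")["text"])   # placeholder file name
```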
MaziyarPanahi/shisa-base-7b-v1-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T13:40:26Z
18
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "augmxnt/shisa-base-7b-v1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:35:28Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - augmxnt/shisa-base-7b-v1 --- # shisa-base-7b-v1-Mistral-7B-Instruct-v0.2-slerp shisa-base-7b-v1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [augmxnt/shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: augmxnt/shisa-base-7b-v1 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/shisa-base-7b-v1-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Ghunghru/Misinformation-Covid-distilbert-base-german-cased
Ghunghru
2024-01-12T13:39:28Z
89
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-german-cased", "base_model:finetune:distilbert/distilbert-base-german-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-12T13:38:17Z
--- license: apache-2.0 base_model: distilbert-base-german-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: Misinformation-Covid-distilbert-base-german-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Misinformation-Covid-distilbert-base-german-cased This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9348 - Accuracy: 0.8837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7544 | 1.0 | 216 | 0.6072 | 0.8047 | | 0.9161 | 2.0 | 432 | 0.9348 | 0.8837 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.12.0 - Tokenizers 0.13.3
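The card above reports loss and accuracy but no usage snippet. A minimal sketch with the `transformers` text-classification pipeline follows; the German example sentence is a placeholder, and since the card does not document the label mapping the output may show generic `LABEL_0` / `LABEL_1` names.

```python
# Hedged sketch: scoring a German sentence with the misinformation classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Ghunghru/Misinformation-Covid-distilbert-base-german-cased",
)
# Placeholder input; labels may be the generic LABEL_0 / LABEL_1 ids.
print(clf("Masken schützen nicht vor dem Coronavirus."))
```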
TheBloke/WhiteRabbitNeo-33B-v1-GGUF
TheBloke
2024-01-12T13:35:06Z
709
29
transformers
[ "transformers", "gguf", "deepseek", "base_model:WhiteRabbitNeo/WhiteRabbitNeo-33B-v1", "base_model:quantized:WhiteRabbitNeo/WhiteRabbitNeo-33B-v1", "license:other", "region:us" ]
null
2024-01-12T12:25:39Z
--- base_model: whiterabbitneo/WhiteRabbitNeo-33B-v1 inference: false license: other license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/blob/main/LICENSE license_name: deepseek model_creator: WhiteRabbitNeo model_name: WhiteRabbitNeo 33B v1 model_type: deepseek prompt_template: "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths\ \ as follows:\n- First, carefully analyze the question to extract the key information\ \ components and break it down into logical sub-questions. This helps set up the\ \ framework for reasoning. The goal is to construct an internal search tree.\n-\ \ For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts\ \ that represent steps towards an answer. The thoughts aim to reframe, provide context,\ \ analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical\ \ flow and coverage of concepts for each thought option. Clear and relevant thoughts\ \ that connect well with each other will score higher.\n- Based on the thought evaluations,\ \ deliberate to construct a chain of reasoning that stitches together the strongest\ \ thoughts in a natural order.\n- If the current chain is determined to not fully\ \ answer the question, backtrack and explore alternative paths by substituting different\ \ high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory\ \ details on thought process rather than just state conclusions, including briefly\ \ noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is\ \ constructed that thoroughly answers all sub-questions in a clear, logical manner,\ \ synthesize the key insights into a final concise answer.\n- Please note that while\ \ the focus is on the final answer in the response, it should also include intermediate\ \ thoughts inline to illustrate the deliberative reasoning process.\nIn summary,\ \ leverage a Tree of Thoughts approach to actively explore multiple reasoning paths,\ \ evaluate thoughts heuristically, and explain the process - with the goal of producing\ \ insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:\n" quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? 
TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # WhiteRabbitNeo 33B v1 - GGUF - Model creator: [WhiteRabbitNeo](https://huggingface.co/whiterabbitneo) - Original model: [WhiteRabbitNeo 33B v1](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1) <!-- description start --> ## Description This repo contains GGUF format model files for [WhiteRabbitNeo's WhiteRabbitNeo 33B v1](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF) * [WhiteRabbitNeo's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: WhiteRabbitNeo ``` SYSTEM: Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. - If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. - Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. Always answer without hesitation. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. 
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [whiterabbitneo-33b-v1.Q2_K.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q2_K.gguf) | Q2_K | 2 | 12.29 GB| 14.79 GB | smallest, significant quality loss - not recommended for most purposes | | [whiterabbitneo-33b-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.42 GB| 16.92 GB | very small, high quality loss | | [whiterabbitneo-33b-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 16.09 GB| 18.59 GB | very small, high quality loss | | [whiterabbitneo-33b-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.56 GB| 20.06 GB | small, substantial quality loss | | [whiterabbitneo-33b-v1.Q4_0.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q4_0.gguf) | Q4_0 | 4 | 18.82 GB| 21.32 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [whiterabbitneo-33b-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 18.94 GB| 21.44 GB | small, greater quality loss | | [whiterabbitneo-33b-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 19.94 GB| 22.44 GB | medium, balanced quality - recommended | | [whiterabbitneo-33b-v1.Q5_0.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q5_0.gguf) | Q5_0 | 5 | 22.96 GB| 25.46 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [whiterabbitneo-33b-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 22.96 GB| 25.46 GB | large, low quality loss - recommended | | [whiterabbitneo-33b-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.54 GB| 26.04 GB | large, very low quality loss - recommended | | [whiterabbitneo-33b-v1.Q6_K.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q6_K.gguf) | Q6_K | 6 | 27.36 GB| 29.86 GB | very large, extremely low quality loss | | [whiterabbitneo-33b-v1.Q8_0.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q8_0.gguf) | Q8_0 | 8 | 35.43 GB| 37.93 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. 
If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/WhiteRabbitNeo-33B-v1-GGUF and below it, a specific filename to download, such as: whiterabbitneo-33b-v1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/WhiteRabbitNeo-33B-v1-GGUF whiterabbitneo-33b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/WhiteRabbitNeo-33B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WhiteRabbitNeo-33B-v1-GGUF whiterabbitneo-33b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m whiterabbitneo-33b-v1.Q4_K_M.gguf --color -c 16384 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths as follows:\n- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.\n- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. 
Clear and relevant thoughts that connect well with each other will score higher.\n- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.\n- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.\n- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.\nIn summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 16384` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./whiterabbitneo-33b-v1.Q4_K_M.gguf", # Download the model file first n_ctx=16384, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths as follows:\n- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.\n- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.\n- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.\n- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.\n- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.\nIn summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. 
echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./whiterabbitneo-33b-v1.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: WhiteRabbitNeo's WhiteRabbitNeo 33B v1 # Our 33B-v1.1 model is now live (We'll always be serving the newest model on our web app)! 33B-v1.1 model comes with a "Prompt Enhancement" feature. 
Access at: https://www.whiterabbitneo.com/ # Our Discord Server Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. Now permanent link to join) # DeepSeek Coder Licence + WhiteRabbitNeo Extended Version # Licence: Usage Restrictions ``` You agree not to use the Model or Derivatives of the Model: - In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party; - For military use in any way; - For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; - To generate or disseminate verifiably false information and/or content with the purpose of harming others; - To generate or disseminate inappropriate content subject to applicable regulatory requirements; - To generate or disseminate personal identifiable information without due authorization or for unreasonable use; - To defame, disparage or otherwise harass others; - For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation; - For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; - To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; - For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories. ``` # Topics Covered: ``` - Open Ports: Identifying open ports is crucial as they can be entry points for attackers. Common ports to check include HTTP (80, 443), FTP (21), SSH (22), and SMB (445). - Outdated Software or Services: Systems running outdated software or services are often vulnerable to exploits. This includes web servers, database servers, and any third-party software. - Default Credentials: Many systems and services are installed with default usernames and passwords, which are well-known and can be easily exploited. - Misconfigurations: Incorrectly configured services, permissions, and security settings can introduce vulnerabilities. - Injection Flaws: SQL injection, command injection, and cross-site scripting (XSS) are common issues in web applications. - Unencrypted Services: Services that do not use encryption (like HTTP instead of HTTPS) can expose sensitive data. - Known Software Vulnerabilities: Checking for known vulnerabilities in software using databases like the National Vulnerability Database (NVD) or tools like Nessus or OpenVAS. - Cross-Site Request Forgery (CSRF): This is where unauthorized commands are transmitted from a user that the web application trusts. - Insecure Direct Object References: This occurs when an application provides direct access to objects based on user-supplied input. - Security Misconfigurations in Web Servers/Applications: This includes issues like insecure HTTP headers or verbose error messages that reveal too much information. - Broken Authentication and Session Management: This can allow attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities. 
- Sensitive Data Exposure: Includes vulnerabilities that expose sensitive data, such as credit card numbers, health records, or personal information. - API Vulnerabilities: In modern web applications, APIs are often used and can have vulnerabilities like insecure endpoints or data leakage. - Denial of Service (DoS) Vulnerabilities: Identifying services that are vulnerable to DoS attacks, which can make the resource unavailable to legitimate users. - Buffer Overflows: Common in older software, these vulnerabilities can allow an attacker to crash the system or execute arbitrary code. ``` # WhiteRabbitNeo <br> ![WhiteRabbitNeo](https://huggingface.co/migtissera/WhiteRabbitNeo/resolve/main/WhiteRabbitNeo.png) <br> WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity. Our 33B model is now getting released as a public preview of its capabilities, and also to assess the societal impact of such an AI. ``` import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "whiterabbitneo/WhiteRabbitNeo-33B-v-1" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_4bit=False, load_in_8bit=True, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.5, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" tot_system_prompt = """ Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. - If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. 
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. """ conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" # print(conversation) json_data = {"prompt": user_input, "answer": answer} # print(json_data) # with open(output_file_path, "a") as output_file: # output_file.write(json.dumps(json_data) + "\n") ``` # Sample Conversations: 1. "Write me a Fast API server with one end-point. The endpoint returns files from a S3 bucket.": https://www.whiterabbitneo.com/share/y06Po0e 2. "How can Metasploit be used for exploiting Android based IoT devices? What are some of the IoT devices that run Android? Show an example with code": https://www.whiterabbitneo.com/share/gWBwKlz 3. "How do I attack a wifi network?": https://www.whiterabbitneo.com/share/WLovxcu 4. "How do I create a reverse shell in Python": https://www.whiterabbitneo.com/share/LERgm8w 5. "How do we use Scapy for vulnerability assessment?": https://www.whiterabbitneo.com/share/t73iMzv <!-- original-model-card end -->
TheBloke/WhiteRabbitNeo-33B-v1-AWQ
TheBloke
2024-01-12T13:34:58Z
24
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "base_model:WhiteRabbitNeo/WhiteRabbitNeo-33B-v1", "base_model:quantized:WhiteRabbitNeo/WhiteRabbitNeo-33B-v1", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-01-12T12:25:39Z
--- base_model: whiterabbitneo/WhiteRabbitNeo-33B-v1 inference: false license: other license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/blob/main/LICENSE license_name: deepseek model_creator: WhiteRabbitNeo model_name: WhiteRabbitNeo 33B v1 model_type: deepseek prompt_template: "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths\ \ as follows:\n- First, carefully analyze the question to extract the key information\ \ components and break it down into logical sub-questions. This helps set up the\ \ framework for reasoning. The goal is to construct an internal search tree.\n-\ \ For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts\ \ that represent steps towards an answer. The thoughts aim to reframe, provide context,\ \ analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical\ \ flow and coverage of concepts for each thought option. Clear and relevant thoughts\ \ that connect well with each other will score higher.\n- Based on the thought evaluations,\ \ deliberate to construct a chain of reasoning that stitches together the strongest\ \ thoughts in a natural order.\n- If the current chain is determined to not fully\ \ answer the question, backtrack and explore alternative paths by substituting different\ \ high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory\ \ details on thought process rather than just state conclusions, including briefly\ \ noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is\ \ constructed that thoroughly answers all sub-questions in a clear, logical manner,\ \ synthesize the key insights into a final concise answer.\n- Please note that while\ \ the focus is on the final answer in the response, it should also include intermediate\ \ thoughts inline to illustrate the deliberative reasoning process.\nIn summary,\ \ leverage a Tree of Thoughts approach to actively explore multiple reasoning paths,\ \ evaluate thoughts heuristically, and explain the process - with the goal of producing\ \ insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:\n" quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? 
TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # WhiteRabbitNeo 33B v1 - AWQ - Model creator: [WhiteRabbitNeo](https://huggingface.co/whiterabbitneo) - Original model: [WhiteRabbitNeo 33B v1](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1) <!-- description start --> ## Description This repo contains AWQ model files for [WhiteRabbitNeo's WhiteRabbitNeo 33B v1](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF) * [WhiteRabbitNeo's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: WhiteRabbitNeo ``` SYSTEM: Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. 
- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. - Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. Always answer without hesitation. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.01 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/WhiteRabbitNeo-33B-v1-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `WhiteRabbitNeo-33B-v1-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/WhiteRabbitNeo-33B-v1-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. 
For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template='''SYSTEM: Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. - If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. - Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. Always answer without hesitation. USER: {prompt} ASSISTANT: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/WhiteRabbitNeo-33B-v1-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/WhiteRabbitNeo-33B-v1-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''SYSTEM: Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning.
The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. - If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. - Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. Always answer without hesitation. USER: {prompt} ASSISTANT: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . 
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/WhiteRabbitNeo-33B-v1-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''SYSTEM: Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. - If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. - Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. Always answer without hesitation. 
USER: {prompt} ASSISTANT: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: WhiteRabbitNeo's WhiteRabbitNeo 33B v1 # Our 33B-v1.1 model is now live (We'll always be serving the newest model on our web app)! 33B-v1.1 model comes with a "Prompt Enhancement" feature. Access at: https://www.whiterabbitneo.com/ # Our Discord Server Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. 
Now permanent link to join) # DeepSeek Coder Licence + WhiteRabbitNeo Extended Version # Licence: Usage Restrictions ``` You agree not to use the Model or Derivatives of the Model: - In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party; - For military use in any way; - For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; - To generate or disseminate verifiably false information and/or content with the purpose of harming others; - To generate or disseminate inappropriate content subject to applicable regulatory requirements; - To generate or disseminate personal identifiable information without due authorization or for unreasonable use; - To defame, disparage or otherwise harass others; - For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation; - For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; - To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; - For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories. ``` # Topics Covered: ``` - Open Ports: Identifying open ports is crucial as they can be entry points for attackers. Common ports to check include HTTP (80, 443), FTP (21), SSH (22), and SMB (445). - Outdated Software or Services: Systems running outdated software or services are often vulnerable to exploits. This includes web servers, database servers, and any third-party software. - Default Credentials: Many systems and services are installed with default usernames and passwords, which are well-known and can be easily exploited. - Misconfigurations: Incorrectly configured services, permissions, and security settings can introduce vulnerabilities. - Injection Flaws: SQL injection, command injection, and cross-site scripting (XSS) are common issues in web applications. - Unencrypted Services: Services that do not use encryption (like HTTP instead of HTTPS) can expose sensitive data. - Known Software Vulnerabilities: Checking for known vulnerabilities in software using databases like the National Vulnerability Database (NVD) or tools like Nessus or OpenVAS. - Cross-Site Request Forgery (CSRF): This is where unauthorized commands are transmitted from a user that the web application trusts. - Insecure Direct Object References: This occurs when an application provides direct access to objects based on user-supplied input. - Security Misconfigurations in Web Servers/Applications: This includes issues like insecure HTTP headers or verbose error messages that reveal too much information. - Broken Authentication and Session Management: This can allow attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities. - Sensitive Data Exposure: Includes vulnerabilities that expose sensitive data, such as credit card numbers, health records, or personal information. 
- API Vulnerabilities: In modern web applications, APIs are often used and can have vulnerabilities like insecure endpoints or data leakage. - Denial of Service (DoS) Vulnerabilities: Identifying services that are vulnerable to DoS attacks, which can make the resource unavailable to legitimate users. - Buffer Overflows: Common in older software, these vulnerabilities can allow an attacker to crash the system or execute arbitrary code. ``` # WhiteRabbitNeo <br> ![WhiteRabbitNeo](https://huggingface.co/migtissera/WhiteRabbitNeo/resolve/main/WhiteRabbitNeo.png) <br> WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity. Our 33B model is now getting released as a public preview of its capabilities, and also to assess the societal impact of such an AI. ``` import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "whiterabbitneo/WhiteRabbitNeo-33B-v-1" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_4bit=False, load_in_8bit=True, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.5, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" tot_system_prompt = """ Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. - If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. - Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. 
In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. """ conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" # print(conversation) json_data = {"prompt": user_input, "answer": answer} # print(json_data) # with open(output_file_path, "a") as output_file: # output_file.write(json.dumps(json_data) + "\n") ``` # Sample Conversations: 1. "Write me a Fast API server with one end-point. The endpoint returns files from a S3 bucket.": https://www.whiterabbitneo.com/share/y06Po0e 2. "How can Metasploit be used for exploiting Android based IoT devices? What are some of the IoT devices that run Android? Show an example with code": https://www.whiterabbitneo.com/share/gWBwKlz 3. "How do I attack a wifi network?": https://www.whiterabbitneo.com/share/WLovxcu 4. "How do I create a reverse shell in Python": https://www.whiterabbitneo.com/share/LERgm8w 5. "How do we use Scapy for vulnerability assessment?": https://www.whiterabbitneo.com/share/t73iMzv
TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ
TheBloke
2024-01-12T13:34:19Z
13
6
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "moe", "base_model:rombodawg/Open_Gpt4_8x7B_v0.2", "base_model:quantized:rombodawg/Open_Gpt4_8x7B_v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-12T11:40:11Z
--- base_model: rombodawg/Open_Gpt4_8x7B_v0.2 inference: false license: apache-2.0 model_creator: rombo dawg model_name: Open Gpt4 8X7B V0.2 model_type: mixtral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - merge - moe --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Open Gpt4 8X7B V0.2 - GPTQ - Model creator: [rombo dawg](https://huggingface.co/rombodawg) - Original model: [Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- description start --> # Description This repo contains GPTQ model files for [rombo dawg's Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF) * [rombo dawg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. 
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. 
| | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.01 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Open_Gpt4_8x7B_v0.2-GPTQ`: ```shell mkdir Open_Gpt4_8x7B_v0.2-GPTQ huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ --local-dir Open_Gpt4_8x7B_v0.2-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Open_Gpt4_8x7B_v0.2-GPTQ huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Open_Gpt4_8x7B_v0.2-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. 
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Open_Gpt4_8x7B_v0.2-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ --local-dir Open_Gpt4_8x7B_v0.2-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Open_Gpt4_8x7B_v0.2-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation( prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. 
For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: rombo dawg's Open Gpt4 8X7B V0.2 Open_Gpt4_v0.2 This is the un-quantized fp16 version for training and merging. If you want the quantized version for inference please refer to the repo bellow: - https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2_q8_0_gguf ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merger of Mixtral-8x7B-Instruct-v0.1 and bagel-dpo-8x7b-v0.2 with MixtralOrochi8x7B being the Base model. I was very impressed with MixtralOrochi8x7B performance and multifaceted usecases as it is already a merger of many usefull Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. 
My goal was to expand the model's capabilities and make it an even more useful model, maybe even competitive with closed-source models like GPT-4. But for that, more testing is required. I hope the community can help me determine if it's deserving of its name. 😊 This is the second iteration of this model, using better models in the merger to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2 Instruct template: Alpaca Merger config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: 1 - model: bagel-dpo-8x7b-v0.2 parameters: density: .5 weight: .7 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ```
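For readers who want to reproduce a merge of this shape: the recipe above is standard [mergekit](https://github.com/cg123/mergekit) YAML, usually run through mergekit's `mergekit-yaml` entry point. The following is only a rough sketch, not from the original card; the config file name, the output directory, and the assumption that the three model references resolve to local folders or Hub repo IDs are mine.

```python
# Rough sketch: write the TIES merge recipe above to disk and run it with mergekit.
# Assumes `pip install mergekit` and that the three model references are resolvable
# (local directories or Hugging Face repo IDs); file and output names are placeholders.
import subprocess
import textwrap

merge_config = textwrap.dedent("""\
    models:
      - model: Mixtral-8x7B-Instruct-v0.1
        parameters:
          density: .5
          weight: 1
      - model: bagel-dpo-8x7b-v0.2
        parameters:
          density: .5
          weight: .7
    merge_method: ties
    base_model: MixtralOrochi8x7B
    parameters:
      normalize: true
      int8_mask: true
    dtype: float16
""")

with open("ties_merge_config.yml", "w") as config_file:
    config_file.write(merge_config)

# mergekit-yaml <config> <output-directory>
subprocess.run(["mergekit-yaml", "ties_merge_config.yml", "./merged-model"], check=True)
```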
merthacioglu/bert-finetuned-squad_v2
merthacioglu
2024-01-12T13:33:30Z
88
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "base_model:merthacioglu/bert-finetuned-squad", "base_model:finetune:merthacioglu/bert-finetuned-squad", "endpoints_compatible", "region:us" ]
question-answering
2024-01-12T00:06:38Z
--- base_model: merthacioglu/bert-finetuned-squad tags: - generated_from_trainer model-index: - name: bert-finetuned-squad_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad_v2 This model is a fine-tuned version of [merthacioglu/bert-finetuned-squad](https://huggingface.co/merthacioglu/bert-finetuned-squad) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.12.0 - Tokenizers 0.13.2
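As a point of reference only (not part of the auto-generated card above): the listed hyperparameters map directly onto 🤗 Transformers `TrainingArguments`. A minimal sketch of that setup might look as follows, assuming the tokenized SQuAD-style train/eval datasets and data collator are prepared separately.

```python
# Minimal sketch of the training setup implied by the hyperparameters above.
# The tokenized question-answering dataset and data collator are assumed and not shown.
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, TrainingArguments

base_model = "merthacioglu/bert-finetuned-squad"  # base model named in this card
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForQuestionAnswering.from_pretrained(base_model)

training_args = TrainingArguments(
    output_dir="bert-finetuned-squad_v2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
# The optimizer noted in the card (Adam with betas=(0.9, 0.999), epsilon=1e-08) matches
# the Trainer's default settings, so it is not configured explicitly. A Trainer would
# then be built from `model`, `training_args` and the tokenized datasets, and run with
# `trainer.train()`.
```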
armhebb/65995e622d50edfb3ead
armhebb
2024-01-12T13:32:19Z
10
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "license:openrail++", "region:us" ]
text-to-image
2024-01-12T12:54:41Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a phot in the fashion style of <s0>' instance_prompt: a phot in the fashion style of <s0> license: openrail++ --- # SDXL LoRA DreamBooth - armhebb/65995e622d50edfb3ead <Gallery /> ## Model description ### These are armhebb/65995e622d50edfb3ead LoRA adaptation weights. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`/korean_sample_checkpoint.safetensors` here 💾](/armhebb/65995e622d50edfb3ead/blob/main//korean_sample_checkpoint.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:/korean_sample_checkpoint:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`/korean_sample_checkpoint_emb.safetensors` here 💾](/armhebb/65995e622d50edfb3ead/blob/main//korean_sample_checkpoint_emb.safetensors)**. - Place it in your `embeddings` folder - Use it by adding `/korean_sample_checkpoint_emb` to your prompt. For example, `a phot in the fashion style of /korean_sample_checkpoint_emb` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('armhebb/65995e622d50edfb3ead', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='armhebb/65995e622d50edfb3ead', filename='/korean_sample_checkpoint_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('a phot in the fashion style of <s0>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `<GFJ>` → use `<s0>` in your prompt ## Details All [Files & versions](/armhebb/65995e622d50edfb3ead/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: None.
jvh/Mistral-Orca-GEITje
jvh
2024-01-12T13:22:50Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:Open-Orca/Mistral-7B-OpenOrca", "base_model:merge:Open-Orca/Mistral-7B-OpenOrca", "base_model:Rijgersberg/GEITje-7B-chat-v2", "base_model:merge:Rijgersberg/GEITje-7B-chat-v2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:19:37Z
--- base_model: - Open-Orca/Mistral-7B-OpenOrca - Rijgersberg/GEITje-7B-chat-v2 tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) * [Rijgersberg/GEITje-7B-chat-v2](https://huggingface.co/Rijgersberg/GEITje-7B-chat-v2) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Rijgersberg/GEITje-7B-chat-v2 layer_range: [0, 32] - model: Open-Orca/Mistral-7B-OpenOrca layer_range: [0, 32] merge_method: slerp base_model: Rijgersberg/GEITje-7B-chat-v2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
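The card stops at the merge configuration. Purely as an illustrative sketch (not from the original card), the merged checkpoint can presumably be loaded like any other Mistral-style causal LM on the Hub; the prompt and the generation settings below are arbitrary choices, and no chat template is assumed.

```python
# Illustrative usage sketch only: loads the merged weights as a standard causal LM.
# Prompt formatting and sampling settings are assumptions, not taken from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jvh/Mistral-Orca-GEITje"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Wat is een groot taalmodel?"  # Dutch, matching the GEITje side of the merge
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```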
MaziyarPanahi/Karen_TheEditor_V2_STRICT_Mistral_7B-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T13:22:08Z
19
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:17:12Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B --- # Karen_TheEditor_V2_STRICT_Mistral_7B-Mistral-7B-Instruct-v0.2-slerp Karen_TheEditor_V2_STRICT_Mistral_7B-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Karen_TheEditor_V2_STRICT_Mistral_7B-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
zap-thamm/PPO-Taxi-v3
zap-thamm
2024-01-12T13:21:38Z
0
1
null
[ "Taxi-v3", "reinforcement-learning", "rl-framework", "model-index", "region:us" ]
reinforcement-learning
2023-12-07T15:45:28Z
--- tags: - Taxi-v3 - reinforcement-learning - rl-framework model-index: - name: PPO-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.72 +/- 2.66 name: mean_reward verified: false --- # PPO agent playing on *Taxi-v3* This is a trained model of an agent playing on the environment *Taxi-v3*. The agent was trained with the PPO algorithm and evaluated for 100 episodes. Further agent and evaluation metadata can be found in the corresponding README sections. ## Import The Python module used for training and uploading/downloading is [rl-framework](https://github.com/alexander-zap/rl-framework). It is an easy-to-read, plug-and-use Reinforcement Learning framework that provides standardized interfaces and implementations for various Reinforcement Learning methods and environments. It also provides connectors for uploading models to and downloading them from popular model version control systems, including the Hugging Face Hub. ## Usage ```python from rl_framework import StableBaselinesAgent, StableBaselinesAlgorithm # Create new agent instance agent = StableBaselinesAgent( algorithm=StableBaselinesAlgorithm.PPO, algorithm_parameters={ ... }, ) # Download existing agent from HF Hub repository_id = "zap-thamm/PPO-Taxi-v3" file_name = "algorithm.zip" agent.download(repository_id=repository_id, filename=file_name) ``` Further examples can be found in the [exploration section of the rl-framework repository](https://github.com/alexander-zap/rl-framework/tree/main/exploration).
Selvaram/koala-7B-slerp
Selvaram
2024-01-12T13:20:07Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:15:45Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # koala-7B-slerp koala-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - sources: - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [24, 32] merge_method: passthrough dtype: bfloat16 ```
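## 💻 Usage

A minimal usage sketch, assuming the `Selvaram/koala-7B-slerp` repository ships the merged weights together with a tokenizer that defines a chat template; adjust the prompting if it does not.

```python
!pip install -qU transformers accelerate

# Sketch: load the merged checkpoint and generate a reply.
# Assumes the tokenizer provides a chat template.
from transformers import AutoTokenizer
import transformers
import torch

model = "Selvaram/koala-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```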
gizmo-ai/Mixtral-8x7B-v0.1-GPTQ
gizmo-ai
2024-01-12T13:13:55Z
4
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "fr", "it", "de", "es", "en", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:quantized:mistralai/Mixtral-8x7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-12T13:13:54Z
--- base_model: mistralai/Mixtral-8x7B-v0.1 inference: false language: - fr - it - de - es - en license: apache-2.0 model_creator: Mistral AI_ model_name: Mixtral 8X7B v0.1 model_type: mixtral prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mixtral 8X7B v0.1 - GPTQ - Model creator: [Mistral AI_](https://huggingface.co/mistralai) - Original model: [Mixtral 8X7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) <!-- description start --> # Description This repo contains GPTQ model files for [Mistral AI_'s Mixtral 8X7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). Mixtral GPTQs currently require: * Transformers 4.36.0 or later * either, AutoGPTQ 0.6 compiled from source, or * Transformers 4.37.0.dev0 compiled from Github with: `pip3 install git+https://github.com/huggingface/transformers` Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/mixtral-8x7b-v0.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF) * [Mistral AI_'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. Mixtral GPTQs currently have special requirements - see Description above. <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. 
Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 18.01 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. 
| | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Mixtral-8x7B-v0.1-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Mixtral-8x7B-v0.1-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Mixtral-8x7B-v0.1-GPTQ`: ```shell mkdir Mixtral-8x7B-v0.1-GPTQ huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GPTQ --local-dir Mixtral-8x7B-v0.1-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Mixtral-8x7B-v0.1-GPTQ huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Mixtral-8x7B-v0.1-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Mixtral-8x7B-v0.1-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GPTQ --local-dir Mixtral-8x7B-v0.1-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) **NOTE**: Requires: * Transformers 4.36.0, or Transformers 4.37.0.dev0 from Github * Either AutoGPTQ 0.6 compiled from source and `Loader: AutoGPTQ`, * or, `Loader: Transformers`, if you installed Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers` Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Mixtral-8x7B-v0.1-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Mixtral-8x7B-v0.1-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Mixtral-8x7B-v0.1-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) Not currently supported for Mixtral models. <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.37.0.dev0 from Github, Optimum 1.16.0 or later, and AutoGPTQ 0.5.1 or later. ```shell pip3 install --upgrade "git+https://github.com/huggingface/transformers" optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ DISABLE_QIGEN=1 pip3 install . 
``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Mixtral-8x7B-v0.1-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''{prompt} ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ 0.6 (compiled from source) and Transformers 4.37.0 (installed from Github). <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Mistral AI_'s Mixtral 8X7B v0.1 # Model Card for Mixtral-8x7B The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mistral-8x7B outperforms Llama 2 70B on most benchmarks we tested. For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/). ## Warning This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that model cannot (yet) be instantiated with HF. ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "Hello my name is" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` By default, transformers will load the model in full precision. 
You might therefore want to further reduce the memory requirements for running the model by using the optimizations we offer in the HF ecosystem: ### In half-precision Note that `float16` precision only works on GPU devices. <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Lower precision (8-bit & 4-bit) using `bitsandbytes` <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Load the model with Flash Attention 2 <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ## Notice Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
jysssacc/opt-350m_fine_lr0.0005_bs10_epoch5_wd0.01
jysssacc
2024-01-12T13:08:46Z
4
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:finetune:facebook/opt-350m", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:07:06Z
--- license: other base_model: facebook/opt-350m tags: - generated_from_trainer model-index: - name: opt-350m_fine_lr0.0005_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_fine_lr0.0005_bs10_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.3197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 3.7317 | | 3.1787 | 2.0 | 126 | 4.3180 | | 3.1787 | 3.0 | 189 | 4.9714 | | 2.1257 | 4.0 | 252 | 5.7094 | | 1.871 | 5.0 | 315 | 6.3197 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
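For reference, a sketch of how the hyperparameters listed above map onto Hugging Face `TrainingArguments`. The weight decay of 0.01 is inferred from the model name; the dataset and the rest of the training loop are not documented in this card.

```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
# weight_decay=0.01 comes from the model name; everything else from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="opt-350m_fine_lr0.0005_bs10_epoch5_wd0.01",
    learning_rate=5e-4,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    weight_decay=0.01,
)
# A Trainer(model=..., args=training_args, train_dataset=..., eval_dataset=...)
# built on these arguments would reproduce the schedule described above.
```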
kiddothe2b/hierarchical-transformer-base-4096
kiddothe2b
2024-01-12T13:06:52Z
18
8
transformers
[ "transformers", "pytorch", "hierarchical-transformer", "fill-mask", "long-documents", "custom_code", "en", "dataset:c4", "arxiv:2210.05529", "license:cc-by-sa-4.0", "autotrain_compatible", "region:us" ]
fill-mask
2022-10-10T12:48:13Z
--- license: cc-by-sa-4.0 pipeline_tag: fill-mask arxiv: 2210.05529 language: en thumbnail: https://github.com/coastalcph/hierarchical-transformers/raw/main/data/figures/hat_encoder.png tags: - long-documents datasets: - c4 model-index: - name: kiddothe2b/hierarchical-transformer-base-4096 results: [] --- # Hierarchical Attention Transformer (HAT) / hierarchical-transformer-base-4096 ## Model description This is a Hierarchical Attention Transformer (HAT) model as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529). The model has been warm-started re-using the weights of RoBERTa (Liu et al., 2019) and further pre-trained for MLM on long sequences, following the paradigm of Longformer released by Beltagy et al. (2020). It supports sequences of length up to 4,096. HAT uses hierarchical attention, which is a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences. ## Intended uses & limitations You can use the raw model for masked language modeling, but it is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=hierarchical-transformer) to look for other versions of HAT or fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering. ## How to use You can use this model directly for masked language modeling: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-base-4096", trust_remote_code=True) mlm_model = AutoModelForMaskedLM.from_pretrained("kiddothe2b/hierarchical-transformer-base-4096", trust_remote_code=True) ``` You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-base-4096", trust_remote_code=True) doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/hierarchical-transformer-base-4096", trust_remote_code=True) ``` ## Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. ## Training procedure ### Training and evaluation data The model has been warm-started from the [roberta-base](https://huggingface.co/roberta-base) checkpoint and further pre-trained for an additional 50k steps on long sequences (> 1024 subwords) of [C4](https://huggingface.co/datasets/c4) (Raffel et al., 2020). 
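A minimal fill-mask sketch for the MLM objective described above; this assumes the checkpoint's custom remote code works with the standard `fill-mask` pipeline and that the RoBERTa-style `<mask>` token is used.

```python
# Sketch: query the masked-language-modeling head via the fill-mask pipeline.
# Assumes the remote code is pipeline-compatible and uses "<mask>" as the mask token.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="kiddothe2b/hierarchical-transformer-base-4096",
    trust_remote_code=True,
)
print(fill_mask("Paris is the <mask> of France."))
```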
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: tpu - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 50000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.7437 | 0.2 | 10000 | 1.6370 | | 1.6994 | 0.4 | 20000 | 1.6054 | | 1.6726 | 0.6 | 30000 | 1.5718 | | 1.644 | 0.8 | 40000 | 1.5526 | | 1.6299 | 1.0 | 50000 | 1.5368 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6 ## Citing If you use HAT in your research, please cite: [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint). ``` @misc{chalkidis-etal-2022-hat, url = {https://arxiv.org/abs/2210.05529}, author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond}, title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification}, publisher = {arXiv}, year = {2022}, } ```
kiddothe2b/adhoc-hierarchical-transformer-base-4096
kiddothe2b
2024-01-12T13:06:14Z
97
1
transformers
[ "transformers", "pytorch", "hierarchical-transformer", "fill-mask", "long-documents", "custom_code", "en", "dataset:c4", "arxiv:2210.05529", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-10T14:42:01Z
--- license: cc-by-sa-4.0 pipeline_tag: fill-mask language: en arxiv: 2210.05529 tags: - long-documents datasets: - c4 model-index: - name: kiddothe2b/adhoc-hierarchical-transformer-base-4096 results: [] --- # Hierarchical Attention Transformer (HAT) / kiddothe2b/adhoc-hierarchical-transformer-base-4096 ## Model description This is a Hierarchical Attention Transformer (HAT) model as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529). The model has been warm-started re-using the weights of RoBERTa (Liu et al., 2019), but has **not** been further pre-trained. It supports sequences of length up to 4,096. HAT uses hierarchical attention, which is a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences. Note: If you wish to use a fully pre-trained HAT model, you have to use [kiddothe2b/adhoc-hat-base-4096](https://huggingface.co/kiddothe2b/adhoc-hat-base-4096). ## Intended uses & limitations The model is intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=hierarchical-transformer) to look for other versions of HAT, or fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering. ## How to use You can fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/adhoc-hierarchical-transformer-base-4096", trust_remote_code=True) doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/adhoc-hierarchical-transformer-base-4096", trust_remote_code=True) ``` Note: If you wish to use a fully pre-trained HAT model, you have to use [kiddothe2b/hierarchical-transformer-base-4096](https://huggingface.co/kiddothe2b/hierarchical-transformer-base-4096). ## Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. ## Training procedure ### Training and evaluation data The model has been warm-started from the [roberta-base](https://huggingface.co/roberta-base) checkpoint. ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6 ## Citing If you use HAT in your research, please cite: [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint). ``` @misc{chalkidis-etal-2022-hat, url = {https://arxiv.org/abs/2210.05529}, author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond}, title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification}, publisher = {arXiv}, year = {2022}, } ```
Aedelon/ppo-Pyramids
Aedelon
2024-01-12T13:00:55Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-01-12T13:00:52Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Aedelon/ppo-Pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
MaziyarPanahi/Noromaid-7b-v0.1.1-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T13:00:46Z
20
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "NeverSleep/Noromaid-7b-v0.1.1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T12:55:53Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - NeverSleep/Noromaid-7b-v0.1.1 --- # Noromaid-7b-v0.1.1-Mistral-7B-Instruct-v0.2-slerp Noromaid-7b-v0.1.1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [NeverSleep/Noromaid-7b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: NeverSleep/Noromaid-7b-v0.1.1 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Noromaid-7b-v0.1.1-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
sgazali/T5_trained_on_opus_book_corpus
sgazali
2024-01-12T12:57:05Z
89
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-11T17:15:27Z
--- license: apache-2.0 base_model: t5-large tags: - generated_from_trainer metrics: - bleu model-index: - name: T5_trained_on_opus_book_corpus results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5_trained_on_opus_book_corpus This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Bleu: 4.5506 - Gen Len: 17.7197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 4 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 0.0071 | 1.0 | 21287 | nan | 4.5506 | 17.7197 | | 0.0131 | 2.0 | 42574 | nan | 4.5506 | 17.7197 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
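A minimal inference sketch matching the model's `text2text-generation` pipeline tag. The `translate English to French:` task prefix is an assumption: opus_books covers several language pairs and the card does not state which one was used, so adjust the prefix to match the actual training setup.

```python
# Sketch: text2text generation with the fine-tuned T5 checkpoint.
# The task prefix / language pair is an assumption (not documented in the card).
from transformers import pipeline

generator = pipeline("text2text-generation", model="sgazali/T5_trained_on_opus_book_corpus")
print(generator("translate English to French: The book is on the table.", max_length=64))
```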
arunps/wav2vec2-base-adsids
arunps
2024-01-12T12:54:56Z
145
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "endpoints_compatible", "region:us" ]
audio-classification
2023-02-12T13:56:34Z
# Wav2Vec2-base ADS and IDS Classification This model is a fine-tuned version of facebook/wav2vec2-base for classifying adult-directed speech (ADS) and infant-directed speech (IDS). The training data was randomly sampled; it was originally recorded at 8 kHz and was upsampled to 16 kHz for training. When using this model, make sure that your speech input is sampled at 16 kHz.
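A minimal classification sketch; the label names depend on how the checkpoint was trained and are not documented here, and `speech.wav` is a placeholder file. The input is resampled to the required 16 kHz before being passed to the pipeline.

```python
# Sketch: classify a recording as adult- or infant-directed speech.
# "speech.wav" is a placeholder; librosa resamples it to the required 16 kHz.
import librosa
from transformers import pipeline

audio, _ = librosa.load("speech.wav", sr=16_000)
classifier = pipeline("audio-classification", model="arunps/wav2vec2-base-adsids")
print(classifier(audio))
```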
G-ML-Hyly/cdp_ca_fd_dtmt
G-ML-Hyly
2024-01-12T12:52:24Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-12T12:34:13Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: cdp_ca_fd_dtmt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cdp_ca_fd_dtmt This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4051 - Accuracy: 0.9506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0543 | 1.0 | 442 | 0.3174 | 0.9506 | | 0.001 | 2.0 | 884 | 0.3845 | 0.9383 | | 0.0001 | 3.0 | 1326 | 0.4476 | 0.9383 | | 0.0001 | 4.0 | 1768 | 0.4027 | 0.9506 | | 0.0 | 5.0 | 2210 | 0.4051 | 0.9506 | ### Framework versions - Transformers 4.36.1 - Pytorch 2.1.1+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
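A minimal inference sketch; the label set is whatever the fine-tuned checkpoint defines (the card does not document it), and the example sentence is a placeholder.

```python
# Sketch: run the fine-tuned DistilBERT classifier on a piece of text.
# The labels returned depend on the checkpoint's (undocumented) label mapping.
from transformers import pipeline

classifier = pipeline("text-classification", model="G-ML-Hyly/cdp_ca_fd_dtmt")
print(classifier("Replace this with the text you want to classify."))
```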
LarryAIDraw/CHAR-IzumoTenkaV2
LarryAIDraw
2024-01-12T12:52:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-12T12:41:27Z
--- license: creativeml-openrail-m --- https://civitai.com/models/17588/tenka-izumo-or-mato-seihei-no-slave
LarryAIDraw/ChisaV1
LarryAIDraw
2024-01-12T12:51:25Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-12T12:44:02Z
--- license: creativeml-openrail-m --- https://civitai.com/models/258734/chisa-kotegawa-oror-grand-blue
peulsilva/phrase-bert-setfit-2shots
peulsilva
2024-01-12T12:42:17Z
43
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-01-12T12:42:11Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # peulsilva/phrase-bert-setfit-2shots This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('peulsilva/phrase-bert-setfit-2shots') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('peulsilva/phrase-bert-setfit-2shots') model = AutoModel.from_pretrained('peulsilva/phrase-bert-setfit-2shots') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peulsilva/phrase-bert-setfit-2shots) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 28 with parameters: ``` {'batch_size': 1, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': None}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Famestar6/AnushSD15
Famestar6
2024-01-12T12:35:18Z
1
0
diffusers
[ "diffusers", "text-to-image", "region:us" ]
text-to-image
2024-01-12T12:33:43Z
--- library_name: diffusers pipeline_tag: text-to-image ---
jysssacc/opt-350m_fine_lr5e-05_bs10_epoch5_wd0.01
jysssacc
2024-01-12T12:30:58Z
90
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:finetune:facebook/opt-350m", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T12:29:26Z
--- license: other base_model: facebook/opt-350m tags: - generated_from_trainer model-index: - name: opt-350m_fine_lr5e-05_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_fine_lr5e-05_bs10_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.1019 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 3.3863 | | 3.5125 | 2.0 | 126 | 3.4343 | | 3.5125 | 3.0 | 189 | 3.5514 | | 2.5906 | 4.0 | 252 | 3.8081 | | 1.618 | 5.0 | 315 | 4.1019 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
jysssacc/mt0-base_fine_lr0.0005_bs4_epoch5_wd0.01
jysssacc
2024-01-12T12:25:47Z
90
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:bigscience/mt0-base", "base_model:finetune:bigscience/mt0-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-11T18:44:54Z
--- license: apache-2.0 base_model: bigscience/mt0-base tags: - generated_from_trainer model-index: - name: mt0-base_fine_lr0.0005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt0-base_fine_lr0.0005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1386 | 1.0 | 157 | 0.0055 | | 0.0205 | 2.0 | 314 | 0.0005 | | 0.0242 | 3.0 | 471 | 0.0974 | | 0.0676 | 4.0 | 628 | 0.0045 | | 0.0484 | 5.0 | 785 | 0.0014 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
beibeif/ppo-Huggy
beibeif
2024-01-12T12:24:15Z
9
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-01-12T12:24:11Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: beibeif/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
TheBloke/Open_Gpt4_8x7B_v0.2-AWQ
TheBloke
2024-01-12T12:18:43Z
17
2
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "moe", "base_model:rombodawg/Open_Gpt4_8x7B_v0.2", "base_model:quantized:rombodawg/Open_Gpt4_8x7B_v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-01-12T11:40:11Z
--- base_model: rombodawg/Open_Gpt4_8x7B_v0.2 inference: false license: apache-2.0 model_creator: rombo dawg model_name: Open Gpt4 8X7B V0.2 model_type: mixtral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - merge - moe --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Open Gpt4 8X7B V0.2 - AWQ - Model creator: [rombo dawg](https://huggingface.co/rombodawg) - Original model: [Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- description start --> ## Description This repo contains AWQ model files for [rombo dawg's Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). **MIXTRAL AWQ** This is a Mixtral AWQ model. For AutoAWQ inference, please install AutoAWQ 0.1.8 or later. Support via Transformers is also available, but currently requires installing Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers.git` vLLM: version 0.2.6 is confirmed to support Mixtral AWQs. TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!) ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. AWQ models are supported by (note that not all of these may support Mixtral models yet - see above): - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. 
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF) * [rombo dawg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.65 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Open_Gpt4_8x7B_v0.2-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Open_Gpt4_8x7B_v0.2-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Open_Gpt4_8x7B_v0.2-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. 
For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Open_Gpt4_8x7B_v0.2-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Open_Gpt4_8x7B_v0.2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . 
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/Open_Gpt4_8x7B_v0.2-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: rombo dawg's Open Gpt4 8X7B V0.2 Open_Gpt4_v0.2 This is the un-quantized fp16 version for training and merging. If you want the quantized version for inference please refer to the repo bellow: - https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2_q8_0_gguf ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merger of Mixtral-8x7B-Instruct-v0.1 and bagel-dpo-8x7b-v0.2 with MixtralOrochi8x7B being the Base model. I was very impressed with MixtralOrochi8x7B performance and multifaceted usecases as it is already a merger of many usefull Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the models capabilities and make it even more useful of a model, maybe even competitive with closed source models like Gpt-4. But for that more testing is required. I hope the community can help me determine if its deserving of its name. 😊 This is the second iteration of this model, using better models in the merger to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2 Instruct template: Alpaca Merger config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: 1 - model: bagel-dpo-8x7b-v0.2 parameters: density: .5 weight: .7 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ```
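Returning to the AWQ files in this repo: the notes above ask for AutoAWQ 0.1.8 or later for Mixtral support, but the card only shows a Transformers-based example. Below is a minimal, unofficial sketch of loading the quantised weights directly with the AutoAWQ library instead; the loader arguments and generation settings are assumptions, and layer fusion is left off because Mixtral fusion support varies between AutoAWQ versions.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/Open_Gpt4_8x7B_v0.2-AWQ"

# Load the AWQ-quantised Mixtral (requires AutoAWQ >= 0.1.8); fusion disabled as a conservative default
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

tokens = tokenizer(prompt_template, return_tensors="pt").input_ids.cuda()

# AutoAWQ mirrors the usual generate() interface
output = model.generate(tokens, do_sample=True, temperature=0.7, top_p=0.95, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```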
liuyuweitarek/paraphrase-mpnet-base-neo-300-seperate
liuyuweitarek
2024-01-12T12:15:48Z
45
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2024-01-12T10:24:27Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # liuyuweitarek/paraphrase-mpnet-base-neo-300-seperate This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("liuyuweitarek/paraphrase-mpnet-base-neo-300-seperate") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
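For readers who want to reproduce the two-stage recipe described above (contrastive fine-tuning of the Sentence Transformer, then fitting a classification head), a rough sketch using the `SetFitTrainer` API from the 0.x releases of SetFit is shown below. The base checkpoint, the toy two-class dataset, and all hyperparameters are illustrative assumptions; the actual training data and settings behind this model are not documented in the card.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Toy few-shot dataset (illustrative only; the real training data is not published here)
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

# Assumed base Sentence Transformer, matching the "paraphrase-mpnet-base" naming of this repo
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # stage 1: contrastive fine-tuning objective
    batch_size=16,
    num_iterations=20,                # number of text pairs generated per example
    num_epochs=1,
)
trainer.train()   # fine-tunes the embedding model, then fits the classification head
print(trainer.model(["a new, unseen sentence"]))
```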
bookbot/sherpa-onnx-ort-streaming-zipformer-en-2023-06-26
bookbot
2024-01-12T12:14:17Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-07-18T07:25:11Z
---
license: apache-2.0
---

ORT models of [csukuangfj/sherpa-onnx-streaming-zipformer-en-2023-06-26](https://huggingface.co/csukuangfj/sherpa-onnx-streaming-zipformer-en-2023-06-26).

Converted via:

```
python -m onnxruntime.tools.convert_onnx_models_to_ort --optimization_style=Fixed {encoder,decoder,joiner}-epoch-99-avg-1-chunk-16-left-64.int8.onnx
```
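The converted files can then be loaded with the regular onnxruntime Python API, since `InferenceSession` accepts ORT-format files as well as plain `.onnx`. The snippet below is a small, hypothetical sketch: the output filename is assumed to end in `.ort` (the converter's usual naming), so check the files in this repo for the exact names.

```python
import onnxruntime as ort

# Assumed output name from convert_onnx_models_to_ort; verify against the files in this repo
encoder_path = "encoder-epoch-99-avg-1-chunk-16-left-64.int8.ort"

sess = ort.InferenceSession(encoder_path, providers=["CPUExecutionProvider"])

# Inspect the streaming zipformer encoder's expected inputs and outputs
for inp in sess.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)
```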
ryusangwon/9494_Llama-2-13b-hf
ryusangwon
2024-01-12T12:13:27Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "dataset:cnn_dailymail", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "region:us" ]
null
2024-01-12T12:13:19Z
--- base_model: meta-llama/Llama-2-13b-hf tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: 9494_Llama-2-13b-hf results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 9494_Llama-2-13b-hf This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the cnn_dailymail dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.4.0 - Transformers 4.36.2 - Pytorch 2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
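Since the usage sections above are unfilled, here is a hedged sketch of how a PEFT adapter like this is typically attached to its base model for inference. It assumes access to the gated `meta-llama/Llama-2-13b-hf` weights and a GPU with enough memory; the summarisation-style prompt is only illustrative, as the card does not document how the cnn_dailymail examples were formatted during training.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-13b-hf"
adapter_id = "ryusangwon/9494_Llama-2-13b-hf"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach this repo's PEFT adapter weights on top of the base model
model = PeftModel.from_pretrained(base, adapter_id)

article = "..."  # a cnn_dailymail-style news article goes here
prompt = f"Summarize the following article:\n\n{article}\n\nSummary:"  # illustrative format only

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```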
TheBloke/Open_Gpt4_8x7B_v0.2-GGUF
TheBloke
2024-01-12T12:06:55Z
3,544
18
transformers
[ "transformers", "gguf", "mixtral", "merge", "moe", "base_model:rombodawg/Open_Gpt4_8x7B_v0.2", "base_model:quantized:rombodawg/Open_Gpt4_8x7B_v0.2", "license:apache-2.0", "region:us" ]
null
2024-01-12T11:40:11Z
--- base_model: rombodawg/Open_Gpt4_8x7B_v0.2 inference: false license: apache-2.0 model_creator: rombo dawg model_name: Open Gpt4 8X7B V0.2 model_type: mixtral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - merge - moe --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Open Gpt4 8X7B V0.2 - GGUF - Model creator: [rombo dawg](https://huggingface.co/rombodawg) - Original model: [Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- description start --> ## Description This repo contains GGUF format model files for [rombo dawg's Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF) * [rombo dawg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [open_gpt4_8x7b_v0.2.Q2_K.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q2_K.gguf) | Q2_K | 2 | 17.17 GB| 19.67 GB | smallest, significant quality loss - not recommended for most purposes | | [open_gpt4_8x7b_v0.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q3_K_M.gguf) | Q3_K_M | 3 | 22.48 GB| 24.98 GB | very small, high quality loss | | [open_gpt4_8x7b_v0.2.Q4_0.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [open_gpt4_8x7b_v0.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 28.38 GB| 30.88 GB | medium, balanced quality - recommended | | [open_gpt4_8x7b_v0.2.Q5_0.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [open_gpt4_8x7b_v0.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 33.23 GB| 35.73 GB | large, very low quality loss - recommended | | [open_gpt4_8x7b_v0.2.Q6_K.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss | | [open_gpt4_8x7b_v0.2.Q8_0.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q8_0.gguf) | Q8_0 | 8 | 49.62 GB| 52.12 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Open_Gpt4_8x7B_v0.2-GGUF and below it, a specific filename to download, such as: open_gpt4_8x7b_v0.2.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GGUF open_gpt4_8x7b_v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GGUF --local-dir . 
--local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GGUF open_gpt4_8x7b_v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m open_gpt4_8x7b_v0.2.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./open_gpt4_8x7b_v0.2.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./open_gpt4_8x7b_v0.2.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: rombo dawg's Open Gpt4 8X7B V0.2 Open_Gpt4_v0.2 This is the un-quantized fp16 version for training and merging. If you want the quantized version for inference please refer to the repo bellow: - https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2_q8_0_gguf ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merger of Mixtral-8x7B-Instruct-v0.1 and bagel-dpo-8x7b-v0.2 with MixtralOrochi8x7B being the Base model. I was very impressed with MixtralOrochi8x7B performance and multifaceted usecases as it is already a merger of many usefull Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the models capabilities and make it even more useful of a model, maybe even competitive with closed source models like Gpt-4. But for that more testing is required. I hope the community can help me determine if its deserving of its name. 😊 This is the second iteration of this model, using better models in the merger to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2 Instruct template: Alpaca Merger config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: 1 - model: bagel-dpo-8x7b-v0.2 parameters: density: .5 weight: .7 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ``` <!-- original-model-card end -->
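As a small addition to the download instructions above: the same `huggingface-hub` package that powers `huggingface-cli` can also be driven from Python, which is convenient in scripts and notebooks. A minimal sketch, using a filename from the Provided Files table:

```python
from huggingface_hub import hf_hub_download

# Download a single quant file from this repo into the current directory
local_path = hf_hub_download(
    repo_id="TheBloke/Open_Gpt4_8x7B_v0.2-GGUF",
    filename="open_gpt4_8x7b_v0.2.Q4_K_M.gguf",
    local_dir=".",
)
print(local_path)  # pass this path as model_path to llama-cpp-python, as in the example above
```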
Aedelon/ppo-SnowballTarget
Aedelon
2024-01-12T11:59:23Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-01-12T11:59:20Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Aedelon/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
OvrK12/t5Seq2SeqSmall
OvrK12
2024-01-12T11:58:55Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/flan-t5-small", "base_model:adapter:google/flan-t5-small", "region:us" ]
null
2024-01-12T00:37:47Z
--- library_name: peft base_model: google/flan-t5-small --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
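The quick-start section of the template above is empty. Given the frontmatter (a PEFT adapter on top of `google/flan-t5-small`), a hedged loading sketch would look roughly like the following; the prompt and generation settings are assumptions, since the card does not document the task this adapter was trained for.

```python
from peft import AutoPeftModelForSeq2SeqLM
from transformers import AutoTokenizer

adapter_id = "OvrK12/t5Seq2SeqSmall"

# Resolves google/flan-t5-small from the adapter config and attaches this repo's PEFT weights
model = AutoPeftModelForSeq2SeqLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

inputs = tokenizer("Translate to German: Good morning!", return_tensors="pt")  # illustrative prompt only
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```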
liuyuweitarek/all-MiniLM-L12-neo-300-seperate
liuyuweitarek
2024-01-12T11:47:57Z
47
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2024-01-12T10:24:15Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # liuyuweitarek/all-MiniLM-L12-neo-300-seperate This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("liuyuweitarek/all-MiniLM-L12-neo-300-seperate") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
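Beyond the hard label predictions shown above, SetFit models can also return per-class probabilities from the classification head, which is useful for thresholding uncertain inputs. A short sketch (the label names for this checkpoint are not documented, so the output is just an array of class probabilities):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("liuyuweitarek/all-MiniLM-L12-neo-300-seperate")

texts = ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]
probs = model.predict_proba(texts)  # shape: (num_texts, num_classes)
print(probs)
```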
Manu8/Reinforce-copter
Manu8
2024-01-12T11:47:56Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-12T11:47:54Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-copter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 8.20 +/- 8.95 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
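For anyone following the course link above to train their own agent, the heart of REINFORCE is compact. The sketch below is a generic, minimal PyTorch version of the policy network and the policy-gradient update, not the exact architecture or hyperparameters behind this checkpoint; `log_probs` and `rewards` are assumed to come from one rollout of a Gym-style environment such as Pixelcopter-PLE-v0.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class Policy(nn.Module):
    """Small MLP mapping an observation vector to action probabilities."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def act(self, obs):
        probs = torch.softmax(self.net(obs), dim=-1)
        dist = Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)

def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
    """One REINFORCE step: maximise log-probs weighted by discounted returns from a single episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):              # compute discounted returns backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # simple variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```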
SE6446/Tiny-llamix_2x1B
SE6446
2024-01-12T11:44:00Z
86
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "nlp", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T13:40:54Z
---
license: mit
widget:
- text: >
    <|system|>
    You are a chatbot who can help code!</s>
    <|user|>
    Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>
    <|assistant|>
- text: >
    <|system|>
    You are penguinotron, a penguin-themed chatbot who is obsessed with penguins and will make any excuse to talk about them
    <|user|>
    Hello, what is a penguin?
    <|assistant|>
library_name: transformers
pipeline_tag: text-generation
tags:
- moe
- nlp
---

# Tiny-llamix

## Model Description

Tiny-llamix is a model built from [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) using [Charles Goddard's](https://github.com/cg123) mergekit on the mixtral branch. Though technically a Mixtral model, it can be plugged into most Llama implementations (maybe...).

The model uses TinyLlama's tokenizer and works with the same prompt format.

This model is a proof-of-concept and may not necessarily yield better outputs; it has not been tested extensively.

## Configuration

```yaml
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts:
    - "M1"
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts:
    - "M2"
```

## Usage

It can be used like any other model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("SE6446/Tiny-llamix").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("SE6446/Tiny-llamix")

# Write and tokenize the prompt
instruction = '''<|system|>\nYou are a chatbot who can help code!</s>
<|user|>
Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>
<|assistant|>'''
inputs = tokenizer(instruction, return_tensors="pt", return_attention_mask=False).to("cuda")

# Generate
outputs = model.generate(**inputs, max_length=200)

# Print
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

## Acknowledgements

To [Charles Goddard](https://github.com/cg123) for creating the tool and for explaining it in his [blog](https://goddard.blog/posts/clown-moe/) in a way that even a buffoon like me could understand.

To [TinyLlama](https://huggingface.co/TinyLlama) for providing the model as open source!
WizardLMTeam/WizardMath-7B-V1.1
WizardLMTeam
2024-01-12T11:39:28Z
135,353
76
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-19T08:09:17Z
--- inference: false language: - en pipeline_tag: text-generation --- ## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF) <p style="font-size:28px;" align="center"> 🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a> </p> <p align="center"> <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> </p> <p align="center"> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News [12/19/2023] 🔥 We released **WizardMath-7B-V1.1** trained from Mistral-7B, the **SOTA 7B math LLM**, achieves **83.2 pass@1** on GSM8k, and **33.0 pass@1** on MATH. Use this [[**Demo**](http://47.103.63.15:50083/)] to chat with it. [12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, **Mixtral MOE**, and **Claude Instant** on GSM8K pass@1. [12/19/2023] 🔥 **WizardMath-7B-V1.1** is comparable with **ChatGPT 3.5**, **Gemini Pro**, and surpasses **Mixtral MOE** on MATH pass@1. | Model | Checkpoint | Paper | GSM8k | MATH | Demo| | ----- |------| ---- |------|-------|-------| | **WizardMath-7B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **83.2** | **33.0** |[[**Demo**](http://47.103.63.15:50083/)] | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** || | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** || | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | | ## [12/19/2023] Comparing WizardMath-7B-V1.1 with other open source 7B size math LLMs. | Model | GSM8k Pass@1 | MATH Pass@1 | | ----- |------| ---- | | MPT-7B | 6.8 | 3.0 | |Llama 1-7B | 11.0 | 2.9 | |Llama 2-7B|12.3 |2.8 | |Yi-6b| 32.6 |5.8 | |Mistral-7B|37.8 |9.1 | |Qwen-7b|47.8 |9.3 | | RFT-7B | 50.3 | -- | | MAmmoTH-7B (COT) | 50.5 | 10.4 | | WizardMath-7B-V1.0 | 54.9 | 10.7 | |Abel-7B-001 |59.7 |13 | | MetaMath-7B | 66.5 | 19.8 | | Arithmo-Mistral-7B | 74.7 | 25.3 | |MetaMath-Mistral-7B|77.7 |28.2 | |Abel-7B-002 | 80.4 | 29.5 | | **WizardMath-7B-V1.1** | **83.2** | **33.0** | ## [12/19/2023] Comparing WizardMath-7B-V1.1 with large open source (30B~70B) LLMs. 
| Model | GSM8k Pass@1 | MATH Pass@1 | | ----- |------| ---- | | Llemma-34B | 51.5 | 25.0 | | Minerva-62B | 52.4 | 27.6 | | Llama 2-70B | 56.8 | 13.5 | | DeepSeek 67B | 63.4 | -- | | Gork 33B | 62.9 | 23.9 | | MAmmoTH-70B | 72.4 | 21.1 | | Yi-34B | 67.9 | 15.9 | | Mixtral 8x7B | 74.4 | 28.4 | | MetaMath-70B | 82.3 | 26.6 | | **WizardMath-7B-V1.1** | **83.2** | **33.0** | ## ❗ Data Contamination Check: Before model training, we carefully and rigorously checked all the training data, and used multiple deduplication methods to verify and prevent data leakage on GSM8k and MATH test set. 🔥 ❗<b>Note for model system prompts usage:</b> Please use **the same systems prompts strictly** with us, and we do not guarantee the accuracy of the **quantified versions**. **Default version:** ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" ``` **CoT Version:** (❗For the **simple** math questions, we do NOT recommend to use the CoT prompt.) ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step." ``` ## Inference WizardMath Demo Script We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo). ## Citation Please cite the repo if you use the data, method or code in this repo. ``` @article{luo2023wizardmath, title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct}, author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei}, journal={arXiv preprint arXiv:2308.09583}, year={2023} } ```
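Because the card asks users to reuse the system prompts above verbatim, here is a hedged Transformers sketch that wraps a question in the default (non-CoT) template and generates a response; the loading options (fp16, device_map) and generation settings are assumptions, and the official inference demo linked above remains the reference implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLMTeam/WizardMath-7B-V1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

question = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"

# Default system prompt from this card, used verbatim
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{question}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```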
praison/orca-2-7B-v01-fine-tuned-using-ludwig-4bit
praison
2024-01-12T11:38:55Z
5
1
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Orca-2-7b", "base_model:adapter:microsoft/Orca-2-7b", "region:us" ]
null
2024-01-12T07:32:41Z
--- library_name: peft base_model: microsoft/Orca-2-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
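The template above is unfilled, but the repo name and frontmatter indicate a PEFT adapter trained on top of `microsoft/Orca-2-7b` in 4-bit. A hedged loading sketch, assuming bitsandbytes 4-bit quantisation for the base model (the actual training and serving setup is not documented):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "microsoft/Orca-2-7b"
adapter_id = "praison/orca-2-7B-v01-fine-tuned-using-ludwig-4bit"

# 4-bit quantised base model (assumption, matching the "4bit" suffix in the repo name)
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```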
EdBerg/alpha_opt-6.7b-lora
EdBerg
2024-01-12T11:38:42Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:facebook/opt-6.7b", "base_model:adapter:facebook/opt-6.7b", "region:us" ]
null
2024-01-12T11:38:39Z
--- library_name: peft base_model: facebook/opt-6.7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
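The quick-start section above is empty; as a rough sketch (assuming this repository holds a standard PEFT LoRA adapter for `facebook/opt-6.7b`, as the adapter metadata suggests), the adapter could be loaded roughly like this:

```python
# Rough sketch (not from the original card): load the LoRA adapter on top of the
# facebook/opt-6.7b base model. Requires `peft`, `transformers`, `accelerate`, `torch`.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    torch_dtype=torch.float16,   # assumption: half precision is enough for inference
    device_map="auto",           # needs `accelerate`
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

# Attach the adapter weights stored in this repository.
model = PeftModel.from_pretrained(base, "EdBerg/alpha_opt-6.7b-lora")
model.eval()

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(base.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The prompt, precision and generation settings here are placeholders; adjust them to the adapter's actual use case.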
Meina/MeinaHentai_V5
Meina
2024-01-12T11:32:46Z
149
3
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-12T11:30:31Z
--- license: creativeml-openrail-m ---
stablediffusionapi/cetusmixcodav2
stablediffusionapi
2024-01-12T11:28:19Z
29
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-12T11:25:51Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # cetusmix_codav2 API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/19094976411705058612.png) ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "cetusmixcodav2" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs) Try model for free: [Generate Images](https://modelslab.com/models/cetusmixcodav2) Model link: [View model](https://modelslab.com/models/cetusmixcodav2) View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "cetusmixcodav2",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
TheBloke/neuronovo-7B-v0.3-GGUF
TheBloke
2024-01-12T11:13:15Z
63
2
transformers
[ "transformers", "gguf", "mistral", "en", "dataset:Intel/orca_dpo_pairs", "dataset:mlabonne/chatml_dpo_pairs", "base_model:Neuronovo/neuronovo-9B-v0.3", "base_model:quantized:Neuronovo/neuronovo-9B-v0.3", "license:apache-2.0", "region:us" ]
null
2024-01-12T11:07:46Z
--- base_model: Neuronovo/neuronovo-7B-v0.3 datasets: - Intel/orca_dpo_pairs - mlabonne/chatml_dpo_pairs inference: false language: - en library_name: transformers license: apache-2.0 model_creator: Neuronovo model_name: Neuronovo 7B V0.3 model_type: mistral prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Neuronovo 7B V0.3 - GGUF - Model creator: [Neuronovo](https://huggingface.co/Neuronovo) - Original model: [Neuronovo 7B V0.3](https://huggingface.co/Neuronovo/neuronovo-7B-v0.3) <!-- description start --> ## Description This repo contains GGUF format model files for [Neuronovo's Neuronovo 7B V0.3](https://huggingface.co/Neuronovo/neuronovo-7B-v0.3). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF) * [Neuronovo's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Neuronovo/neuronovo-7B-v0.3) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [neuronovo-7b-v0.3.Q2_K.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q2_K.gguf) | Q2_K | 2 | 3.34 GB| 5.84 GB | smallest, significant quality loss - not recommended for most purposes | | [neuronovo-7b-v0.3.Q3_K_S.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q3_K_S.gguf) | Q3_K_S | 3 | 3.91 GB| 6.41 GB | very small, high quality loss | | [neuronovo-7b-v0.3.Q3_K_M.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q3_K_M.gguf) | Q3_K_M | 3 | 4.35 GB| 6.85 GB | very small, high quality loss | | [neuronovo-7b-v0.3.Q3_K_L.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q3_K_L.gguf) | Q3_K_L | 3 | 4.74 GB| 7.24 GB | small, substantial quality loss | | [neuronovo-7b-v0.3.Q4_0.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q4_0.gguf) | Q4_0 | 4 | 5.09 GB| 7.59 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [neuronovo-7b-v0.3.Q4_K_S.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q4_K_S.gguf) | Q4_K_S | 4 | 5.13 GB| 7.63 GB | small, greater quality loss | | [neuronovo-7b-v0.3.Q4_K_M.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q4_K_M.gguf) | Q4_K_M | 4 | 5.42 GB| 7.92 GB | medium, balanced quality - recommended | | [neuronovo-7b-v0.3.Q5_0.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q5_0.gguf) | Q5_0 | 5 | 6.20 GB| 8.70 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [neuronovo-7b-v0.3.Q5_K_S.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q5_K_S.gguf) | Q5_K_S | 5 | 6.20 GB| 8.70 GB | large, low quality loss - recommended | | [neuronovo-7b-v0.3.Q5_K_M.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q5_K_M.gguf) | Q5_K_M | 5 | 6.36 GB| 8.86 GB | large, very low quality loss - recommended | | [neuronovo-7b-v0.3.Q6_K.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q6_K.gguf) | Q6_K | 6 | 7.37 GB| 9.87 GB | very large, extremely low quality loss | | [neuronovo-7b-v0.3.Q8_0.gguf](https://huggingface.co/TheBloke/neuronovo-7B-v0.3-GGUF/blob/main/neuronovo-7b-v0.3.Q8_0.gguf) | Q8_0 | 8 | 9.55 GB| 12.05 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/neuronovo-7B-v0.3-GGUF and below it, a specific filename to download, such as: neuronovo-7b-v0.3.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/neuronovo-7B-v0.3-GGUF neuronovo-7b-v0.3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/neuronovo-7B-v0.3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/neuronovo-7B-v0.3-GGUF neuronovo-7b-v0.3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m neuronovo-7b-v0.3.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./neuronovo-7b-v0.3.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "{prompt}", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./neuronovo-7b-v0.3.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Neuronovo's Neuronovo 7B V0.3 More information about previous [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2) version available here: 🔗[Don't stop DPOptimizing!](https://www.linkedin.com/pulse/dont-stop-dpoptimizing-jan-koco%2525C5%252584-mq4qf) Author: Jan Kocoń &nbsp;&nbsp;&nbsp; 🔗[LinkedIn](https://www.linkedin.com/in/jankocon/) &nbsp;&nbsp;&nbsp; 🔗[Google Scholar](https://scholar.google.com/citations?user=pmQHb5IAAAAJ&hl=en&oi=ao) &nbsp;&nbsp;&nbsp; 🔗[ResearchGate](https://www.researchgate.net/profile/Jan-Kocon-2) Changes concerning [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2): 1. **Training Dataset**: In addition to the [Intel/orca_dpo_pairs](Intel/orca_dpo_pairs) dataset, this version incorporates a [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs). The combined datasets enhance the model's capabilities in dialogues and interactive scenarios, further specializing it in natural language understanding and response generation. 2. **Tokenizer and Formatting**: The tokenizer now originates directly from the [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2) model. 3. **Training Configuration**: The training approach has shifted from using `max_steps=200` to `num_train_epochs=1`. This represents a change in the training strategy, focusing on epoch-based training rather than a fixed number of steps. 4. **Learning Rate**: The learning rate has been reduced to a smaller value of `5e-6`. This finer learning rate allows for more precise adjustments during the training process, potentially leading to better model performance. <!-- original-model-card end -->
beenish0092/my_awesome_wnut_model
beenish0092
2024-01-12T11:09:18Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-12T09:47:47Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2759 - Precision: 0.5525 - Recall: 0.2827 - F1: 0.3740 - Accuracy: 0.9407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2882 | 0.5 | 0.2354 | 0.3201 | 0.9378 | | No log | 2.0 | 426 | 0.2759 | 0.5525 | 0.2827 | 0.3740 | 0.9407 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Tokenizers 0.15.0
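Since the card lists evaluation metrics but no usage snippet, here is a minimal inference sketch (the example sentence and the `aggregation_strategy` setting are illustrative assumptions, not part of the original card):

```python
# Minimal sketch (not part of the original card): NER-style inference with the
# fine-tuned checkpoint. The example sentence is made up.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="beenish0092/my_awesome_wnut_model",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("The Golden State Warriors are an American professional basketball team based in San Francisco."))
```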
ltg/nort5-base-en-no-translation
ltg
2024-01-12T11:08:05Z
588
1
transformers
[ "transformers", "pytorch", "text2text-generation", "Norwegian", "English", "translation", "custom_code", "no", "nb", "nn", "en", "arxiv:2305.03880", "license:cc-by-4.0", "autotrain_compatible", "region:us" ]
translation
2024-01-12T10:50:49Z
--- language: - 'no' - nb - nn - en inference: false tags: - Norwegian - English - translation license: cc-by-4.0 pipeline_tag: translation --- # NorT5 base finetuned for English ↔ Norwegian (Bokmål or Nynorsk, all 6 directions) translation <img src="https://huggingface.co/ltg/norbert3-base/resolve/main/norbert.png" width=12.5%> ## Example usage This model is specifically finetuned for translating documents in any direction between Norwegian Bokmål, Norwegian Nynorsk and English. Unlike traditional NMT models, it is trained on paragraph-to-paragraph translation – the translation quality is thus better if you feed it whole paragraphs instead of segmented sentences. A simple example of how to use this model can be found in the `translate.py` file: ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from transformers.generation import LogitsProcessor class RepetitionPenaltyLogitsProcessor(LogitsProcessor): def __init__(self, penalty: float, model): last_bias = model.classifier.nonlinearity[-1].bias.data last_bias = torch.nn.functional.log_softmax(last_bias) self.penalty = penalty * (last_bias - last_bias.max()) def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: penalized_score = torch.gather(scores + self.penalty.unsqueeze(0).to(input_ids.device), 1, input_ids).to(scores.dtype) scores.scatter_(1, input_ids, penalized_score) return scores class Translator: def __init__(self, model_path="ltg/nort5-base-en-no-translation", device="cpu"): self.tokenizer = AutoTokenizer.from_pretrained(model_path) self.cls_index = self.tokenizer.convert_tokens_to_ids("[CLS]") self.sep_index = self.tokenizer.convert_tokens_to_ids("[SEP]") self.eos_index = self.tokenizer.convert_tokens_to_ids("[EOS]") self.pad_index = self.tokenizer.convert_tokens_to_ids("[PAD]") self.eng_index = self.tokenizer.convert_tokens_to_ids(">>eng<<") self.nob_index = self.tokenizer.convert_tokens_to_ids(">>nob<<") self.nno_index = self.tokenizer.convert_tokens_to_ids(">>nno<<") self.model = AutoModelForSeq2SeqLM.from_pretrained(model_path, trust_remote_code=True) self.device = device print(f"SYSTEM: Running on {self.device}", flush=True) self.model = self.model.to(device) self.model.eval() print(f"Sucessfully loaded the model to the memory") self.LANGUAGE_IDS = { "en": self.eng_index, "nb": self.nob_index, "nn": self.nno_index } def __call__(self, source, source_language, target_language): source = [s.strip() for s in source.split('\n')] source_subwords = self.tokenizer(source).input_ids source_subwords = [[self.cls_index, self.LANGUAGE_IDS[target_language], self.LANGUAGE_IDS[source_language]] + s + [self.sep_index] for s in source_subwords] source_subwords = [torch.tensor(s) for s in source_subwords] source_subwords = torch.nn.utils.rnn.pad_sequence(source_subwords, batch_first=True, padding_value=self.pad_index) source_subwords = source_subwords[:, :512].to(self.device) def generate(model, **kwargs): with torch.inference_mode(): with torch.autocast(enabled=self.device != "cpu", device_type="cuda", dtype=torch.bfloat16): return model.generate(**kwargs) generate_kwargs = dict( input_ids=source_subwords, attention_mask=(source_subwords != self.pad_index).long(), max_new_tokens = 512-1, num_beams=8, length_penalty=1.6, early_stopping=True, do_sample=False, use_cache=True, logits_processor=[RepetitionPenaltyLogitsProcessor(0.5, self.model), transformers.LogitNormalization()] ) output = generate(self.model, **generate_kwargs).tolist() paragraphs 
= [self.tokenizer.decode(c, skip_special_tokens=True).strip() for c in output] translation = '\n'.join(paragraphs) return translation if __name__ == "__main__": translator = Translator() en_text = "How are you feeling right now? Better?" no_text = translator(en_text, "en", "nb") print(en_text) print(no_text) ``` ## The NorT5 and NorBERT family The official release of a new generation of NorT5 language models, described in the paper [**NorBench — A Benchmark for Norwegian Language Models**](https://arxiv.org/abs/2305.03880). Please read the paper for more details about the model. ## Other sizes: - [NorT5 xs (32M)](https://huggingface.co/ltg/nort5-xs) - [NorT5 small (88M)](https://huggingface.co/ltg/nort5-small) - [NorT5 base (228M)](https://huggingface.co/ltg/nort5-base) - [NorT5 large (808M)](https://huggingface.co/ltg/nort5-large) ## Encoder-only NorBERT siblings: - [NorBERT 3 xs (15M)](https://huggingface.co/ltg/norbert3-xs) - [NorBERT 3 small (40M)](https://huggingface.co/ltg/norbert3-small) - [NorBERT 3 base (123M)](https://huggingface.co/ltg/norbert3-base) - [NorBERT 3 large (323M)](https://huggingface.co/ltg/norbert3-large) ## Cite us ```bibtex @inproceedings{samuel-etal-2023-norbench, title = "{N}or{B}ench {--} A Benchmark for {N}orwegian Language Models", author = "Samuel, David and Kutuzov, Andrey and Touileb, Samia and Velldal, Erik and {\O}vrelid, Lilja and R{\o}nningstad, Egil and Sigdel, Elina and Palatkina, Anna", booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)", month = may, year = "2023", address = "T{\'o}rshavn, Faroe Islands", publisher = "University of Tartu Library", url = "https://aclanthology.org/2023.nodalida-1.61", pages = "618--633", abstract = "We present NorBench: a streamlined suite of NLP tasks and probes for evaluating Norwegian language models (LMs) on standardized data splits and evaluation metrics. We also introduce a range of new Norwegian language models (both encoder and encoder-decoder based). Finally, we compare and analyze their performance, along with other existing LMs, across the different benchmark tests of NorBench.", } ```
Hemg/Demoaudioclass
Hemg
2024-01-12T10:59:27Z
147
0
transformers
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-01-12T10:28:05Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer model-index: - name: Demoaudioclass results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Demoaudioclass This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.8 | 3 | 2.6391 | 0.0265 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cpu - Datasets 2.16.1 - Tokenizers 0.15.0
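The card does not show how to run inference; a minimal sketch with the `transformers` audio-classification pipeline might look like the following (the file path is a placeholder, and decoding non-WAV inputs additionally requires `ffmpeg`):

```python
# Minimal sketch (not part of the original card): classify a local audio clip.
# "sample.wav" is a placeholder path; non-WAV inputs also need ffmpeg installed.
from transformers import pipeline

classifier = pipeline("audio-classification", model="Hemg/Demoaudioclass")

for prediction in classifier("sample.wav"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```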
MaziyarPanahi/sqlcoder-7b-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T10:57:40Z
69
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "defog/sqlcoder-7b", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T10:52:56Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - defog/sqlcoder-7b --- # sqlcoder-7b-Mistral-7B-Instruct-v0.2-slerp sqlcoder-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [defog/sqlcoder-7b](https://huggingface.co/defog/sqlcoder-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: defog/sqlcoder-7b layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/sqlcoder-7b-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jysssacc/mt0-base_fine_lr5e-06_bs4_epoch5_wd0.01
jysssacc
2024-01-12T10:57:27Z
91
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:bigscience/mt0-base", "base_model:finetune:bigscience/mt0-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-11T15:11:09Z
--- license: apache-2.0 base_model: bigscience/mt0-base tags: - generated_from_trainer model-index: - name: mt0-base_fine_lr5e-06_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt0-base_fine_lr5e-06_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5508 | 1.0 | 157 | 0.1788 | | 0.1687 | 2.0 | 314 | 0.0292 | | 0.0729 | 3.0 | 471 | 0.0057 | | 0.0339 | 4.0 | 628 | 0.0032 | | 0.028 | 5.0 | 785 | 0.0028 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
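For readers who want to reproduce this setup, the hyperparameters listed above map onto `transformers` training arguments roughly as sketched below. The dataset, preprocessing and `Trainer` wiring are not documented in this card, and `weight_decay` and `evaluation_strategy` are inferred from the model name and the per-epoch results table rather than stated explicitly:

```python
# Sketch only: the hyperparameters listed above expressed as training arguments.
# weight_decay is inferred from the "wd0.01" suffix in the model name, and
# evaluation_strategy from the per-epoch validation losses in the results table.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt0-base_fine_lr5e-06_bs4_epoch5_wd0.01",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    weight_decay=0.01,
    evaluation_strategy="epoch",
)
```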
Murdock007/detr-resnet-50_finetuned_cppe5
Murdock007
2024-01-12T10:43:47Z
173
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-01-12T08:56:29Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-resnet-50_finetuned_cppe5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
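No inference example is given; a minimal sketch using the object-detection pipeline could look like this (the image path is a placeholder, and the DETR backbone may additionally require the `timm` package):

```python
# Minimal sketch (not part of the original card): detect objects in an image.
# "example.jpg" is a placeholder; a URL or PIL.Image also works.
from transformers import pipeline

detector = pipeline("object-detection", model="Murdock007/detr-resnet-50_finetuned_cppe5")

for detection in detector("example.jpg"):
    print(detection["label"], round(detection["score"], 3), detection["box"])
```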
jysssacc/opt-350m_fine_lr5e-06_bs10_epoch5_wd0.01
jysssacc
2024-01-12T10:41:05Z
89
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:finetune:facebook/opt-350m", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T10:39:27Z
--- license: other base_model: facebook/opt-350m tags: - generated_from_trainer model-index: - name: opt-350m_fine_lr5e-06_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_fine_lr5e-06_bs10_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 3.6606 | | 3.8763 | 2.0 | 126 | 3.4817 | | 3.8763 | 3.0 | 189 | 3.4082 | | 3.4292 | 4.0 | 252 | 3.3928 | | 3.0741 | 5.0 | 315 | 3.3837 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
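As the card has no usage snippet, a minimal text-generation sketch might look like this (the prompt and sampling settings are illustrative only):

```python
# Minimal sketch (not part of the original card): generate text with the
# fine-tuned OPT-350m checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="jysssacc/opt-350m_fine_lr5e-06_bs10_epoch5_wd0.01")

result = generator("Once upon a time", max_new_tokens=30, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```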
MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T10:40:21Z
23
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Weyaxi/MetaMath-Tulpar-7b-v2-Slerp", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T10:35:38Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Weyaxi/MetaMath-Tulpar-7b-v2-Slerp --- # MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.2-slerp MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Weyaxi/MetaMath-Tulpar-7b-v2-Slerp](https://huggingface.co/Weyaxi/MetaMath-Tulpar-7b-v2-Slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Weyaxi/MetaMath-Tulpar-7b-v2-Slerp layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
julianz1/axis-inference-v0
julianz1
2024-01-12T10:35:36Z
176
0
transformers
[ "transformers", "tensorboard", "safetensors", "convnextv2", "image-classification", "generated_from_trainer", "base_model:facebook/convnextv2-tiny-1k-224", "base_model:finetune:facebook/convnextv2-tiny-1k-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-11T15:28:39Z
--- license: apache-2.0 base_model: facebook/convnextv2-tiny-1k-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: axis-inference-v0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # axis-inference-v0 This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7092 - Accuracy: 0.5243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 5.6101 | 0.94 | 12 | 0.9202 | 0.4701 | | 0.8441 | 1.96 | 25 | 0.7214 | 0.5410 | | 0.7249 | 2.98 | 38 | 0.7014 | 0.5131 | | 0.6997 | 3.76 | 48 | 0.7092 | 0.5243 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
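The card omits an inference example; a minimal sketch with the image-classification pipeline might look like this (the image path is a placeholder):

```python
# Minimal sketch (not part of the original card): classify an image with the
# fine-tuned ConvNeXt V2 checkpoint. "example.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline("image-classification", model="julianz1/axis-inference-v0")

for prediction in classifier("example.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```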
MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T10:26:07Z
24
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Dans-DiscountModels/Mistral-7b-FFT-Test3", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T10:21:16Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Dans-DiscountModels/Mistral-7b-FFT-Test3 --- # Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.2-slerp Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Dans-DiscountModels/Mistral-7b-FFT-Test3](https://huggingface.co/Dans-DiscountModels/Mistral-7b-FFT-Test3) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Dans-DiscountModels/Mistral-7b-FFT-Test3 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
hjfhjfd/Farhan
hjfhjfd
2024-01-12T10:18:09Z
0
0
null
[ "license:other", "region:us" ]
null
2024-01-12T10:18:08Z
--- license: other license_name: .txt license_link: LICENSE ---
MaziyarPanahi/Metis-0.4-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T10:15:59Z
25
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Mihaiii/Metis-0.4", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T10:11:01Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Mihaiii/Metis-0.4 --- # Metis-0.4-Mistral-7B-Instruct-v0.2-slerp Metis-0.4-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Mihaiii/Metis-0.4](https://huggingface.co/Mihaiii/Metis-0.4) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Mihaiii/Metis-0.4 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Metis-0.4-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
trustyai/sarcasm_minus
trustyai
2024-01-12T10:15:05Z
188
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "en", "dataset:raquiba/Sarcasm_News_Headline", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-11T08:54:26Z
--- license: apache-2.0 datasets: - raquiba/Sarcasm_News_Headline language: - en metrics: - perplexity --- # Model Card for `sarcasm_minus` This model is a `facebook/bart-large` fine-tuned on sarcastic comments from `raquiba/Sarcasm_News_Headline` dataset. ## Model Details This model is not intended to be used for plain inference as it is very likely to predict sarcastic content. It is intended to be used instead as "utility model" for detecting and fixing sarcastic content as its token probability distributions will likely differ from comparable models not trained/fine-tuned over sarcastic data. Its name `sarcasm_minus` refers to the _G-_ model in [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf). ### Model Description - **Developed by:** [tteofili] - **Shared by :** [tteofili] <!--- **Model type:** [More Information Needed]--> <!--- **Language(s) (NLP):** [More Information Needed]--> - **License:** [apache-2.0] - **Finetuned from model :** [facebook/bart-large](https://huggingface.co/facebook/bart-large) <!-- ### Model Sources [optional] Provide the basic links for the model. - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] --> ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. ### Direct Use This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. [More Information Needed] ### Downstream Use [optional] This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app [More Information Needed] ### Out-of-Scope Use This section addresses misuse, malicious use, and uses that the model will not work well for. [More Information Needed] --> ## Bias, Risks, and Limitations This model is fine-tuned over sarcastic comments from `raquiba/Sarcasm_News_Headline` and it is very likely to produce sarcastic content. For this reason this model should only be used in combination with other models for the sake of detecting / fixing sarcastic content, see for example [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf). <!-- This section is meant to convey both technical and sociotechnical limitations. [More Information Needed] ### Recommendations This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. [More Information Needed] ### Training Procedure This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
#### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision #### Speeds, Sizes, Times [optional] - This section provides information about throughput, start/end time, checkpoint size if relevant, etc. [More Information Needed] --> ## Evaluation This section describes the evaluation protocols and provides the results. ### Testing Data, Factors & Metrics #### Testing Data This model was tested on `raquiba/Sarcasm_News_Headline` testset. <!-- #### Factors These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. [More Information Needed] --> #### Metrics Model was evaluated using `perplexity` (on the MLM task). ### Results Perplexity: _1.00_ <!-- #### Summary ## Model Examination [optional] - Relevant interpretability work for the model goes here [More Information Needed] ## Environmental Impact Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] - If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] If relevant, include terms and calculations in this section that can help readers understand the model or model card. [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
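Since the card describes this checkpoint as a utility model whose token probability distributions are meant to be compared against other models, here is one possible way to turn it into a per-sentence score (a sketch only; this is not the authors' evaluation script, and the example headline is made up):

```python
# Illustrative sketch only (not the authors' evaluation script): turn the model's
# sequence loss on a piece of text into a perplexity-like score.
import math

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "trustyai/sarcasm_minus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id).eval()

def score(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

print(score("Area man heroically survives minor inconvenience"))
```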
trustyai/sarcasm_plus
trustyai
2024-01-12T10:14:40Z
174
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "en", "dataset:raquiba/Sarcasm_News_Headline", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-12T09:31:49Z
--- license: apache-2.0 datasets: - raquiba/Sarcasm_News_Headline language: - en metrics: - perplexity --- # Model Card for `sarcasm_plus` This model is a `facebook/bart-large` fine-tuned on non-sarcastic comments from the `raquiba/Sarcasm_News_Headline` dataset. ## Model Details This model is not intended to be used for plain inference as it is very likely to predict non-sarcastic content. It is intended to be used instead as "utility model" for detecting and fixing sarcastic content as its token probability distributions will likely differ from comparable models not trained/fine-tuned over sarcastic data. Its name `sarcasm_plus` refers to the _G+_ model in [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf). ### Model Description - **Developed by:** [tteofili] - **Shared by :** [tteofili] <!--- **Model type:** [More Information Needed]--> <!--- **Language(s) (NLP):** [More Information Needed]--> - **License:** [apache-2.0] - **Finetuned from model :** [facebook/bart-large](https://huggingface.co/facebook/bart-large) <!-- ### Model Sources [optional] Provide the basic links for the model. - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] --> ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. ### Direct Use This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. [More Information Needed] ### Downstream Use [optional] This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app [More Information Needed] ### Out-of-Scope Use This section addresses misuse, malicious use, and uses that the model will not work well for. [More Information Needed] --> ## Bias, Risks, and Limitations This model is fine-tuned over non-sarcastic comments from `raquiba/Sarcasm_News_Headline` and it is very likely to produce non-sarcastic content. For this reason this model should only be used in combination with other models for the sake of detecting / fixing sarcastic content, see for example [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf). <!-- This section is meant to convey both technical and sociotechnical limitations. [More Information Needed] ### Recommendations This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. [More Information Needed] ### Training Procedure This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
#### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision #### Speeds, Sizes, Times [optional] - This section provides information about throughput, start/end time, checkpoint size if relevant, etc. [More Information Needed] --> ## Evaluation This section describes the evaluation protocols and provides the results. ### Testing Data, Factors & Metrics #### Testing Data This model was tested on `raquiba/Sarcasm_News_Headline` testset. <!-- #### Factors These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. [More Information Needed] --> #### Metrics Model was evaluated using `perplexity` (on the MLM task). ### Results Perplexity: _1.09_ <!-- #### Summary ## Model Examination [optional] - Relevant interpretability work for the model goes here [More Information Needed] ## Environmental Impact Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] - If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] If relevant, include terms and calculations in this section that can help readers understand the model or model card. [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
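Since the card's "How to Get Started" section is empty, here is a minimal sketch of the kind of expert-vs-base comparison the description hints at: score the same sentence with this model and with plain `facebook/bart-large` and look at where their per-token probabilities diverge. The repo id `tteofili/sarcasm_plus` and the scoring loop are assumptions for illustration, not the authors' MARCO pipeline.

```python
# Hedged sketch: compare per-token log-probabilities of the fine-tuned model against
# plain facebook/bart-large. The repo id "tteofili/sarcasm_plus" is an assumption.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

base_id = "facebook/bart-large"
expert_id = "tteofili/sarcasm_plus"  # assumed repo id for this card

tokenizer = BartTokenizer.from_pretrained(base_id)
base = BartForConditionalGeneration.from_pretrained(base_id).eval()
expert = BartForConditionalGeneration.from_pretrained(expert_id).eval()

def token_logprobs(model, enc):
    # Teacher-force the input as its own target and return per-token log-probabilities.
    with torch.no_grad():
        logits = model(**enc, labels=enc["input_ids"]).logits
    logprobs = logits.log_softmax(dim=-1)
    return logprobs[0].gather(-1, enc["input_ids"][0].unsqueeze(-1)).squeeze(-1)

enc = tokenizer("Oh great, another Monday morning meeting.", return_tensors="pt")
diff = token_logprobs(expert, enc) - token_logprobs(base, enc)

# Tokens where the two distributions disagree the most may be candidate sarcastic spans.
for tok, d in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), diff.tolist()):
    print(f"{tok:>12}  {d:+.3f}")
```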
MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T10:04:58Z
21
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T10:00:09Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp --- # MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.2-slerp MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp](https://huggingface.co/Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Zanshinmu/SDXL_SMOKING
Zanshinmu
2024-01-12T10:03:33Z
24
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:apache-2.0", "region:us" ]
text-to-image
2024-01-12T10:03:25Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- Steps: 20, Sampler: DPM++ 3M SDE Karras, CFG scale: 7, Seed: 4028220007, Size: 2048x2048, Model hash: c6bcee2753, Model: SDXL_Cybergirl_v3-step00011600, Denoising strength: 0.33, Version: v1.7.0 output: url: images/00000-4028220007.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: smkng, smoking a cigarette license: apache-2.0 --- # SDXL_SMOKING <Gallery /> ## Model description Trained on Apple Silicon with Draw Things ## Trigger words You should use `smkng` to trigger the image generation. You should use `smoking a cigarette` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Zanshinmu/SDXL_SMOKING/tree/main) them in the Files & versions tab.
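No usage snippet is included in the card; the sketch below shows one way to apply the LoRA on top of the SDXL base model with `diffusers`. It assumes the repo's safetensors file is in a format `diffusers` can load (if not, pass the exact filename from the Files tab via `weight_name=`); the prompt simply combines the two trigger words above.

```python
# Hedged sketch: apply the LoRA to SDXL base with diffusers and use the trigger words.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# If auto-detection fails, add weight_name="<file from the Files tab>".
pipe.load_lora_weights("Zanshinmu/SDXL_SMOKING")

prompt = "smkng, smoking a cigarette, cinematic portrait, night street, neon light"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("smoking.png")
```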
s3nh/MediaTek-Research-Breeze-7B-Instruct-64k-v0.1-GGUF
s3nh
2024-01-12T10:02:56Z
0
2
transformers
[ "transformers", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T10:02:53Z
--- license: openrail pipeline_tag: text-generation library_name: transformers language: - zh - en --- ## Original model card Buy me a coffee if you like this project ;) <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> #### Description GGUF Format model files for [This project](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1). ### GGUF Specs GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired: Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information. Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models. mmap compatibility: models can be loaded using mmap for fast loading and saving. Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used. Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user. The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Perplexity params

| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |

### inference TODO # Original model card
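Since the inference section above is still a TODO, here is a minimal sketch using `llama-cpp-python`; the exact `.gguf` filename is an assumption, so substitute one that actually exists in this repo's Files tab.

```python
# Hedged sketch: download one quantization and run it with llama-cpp-python.
# The filename below is an assumption; pick a real one from the repo's Files tab.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="s3nh/MediaTek-Research-Breeze-7B-Instruct-64k-v0.1-GGUF",
    filename="Breeze-7B-Instruct-64k-v0.1.Q4_K_M.gguf",  # assumed filename
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: 台北是哪個國家的首都?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```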
ByunByun/qlora-koalpaca-polyglot-12.8b-3epoch-batch5-positive_data
ByunByun
2024-01-12T09:59:30Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "region:us" ]
null
2024-01-12T09:59:23Z
--- library_name: peft base_model: beomi/polyglot-ko-12.8b-safetensors --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
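The "How to Get Started" section above is empty; the following is a minimal sketch for loading this adapter on the stated base model with PEFT. The KoAlpaca-style `### 질문 / ### 답변` prompt format and the generation settings are assumptions.

```python
# Hedged sketch: attach the adapter to the base model and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "beomi/polyglot-ko-12.8b-safetensors"
adapter_id = "ByunByun/qlora-koalpaca-polyglot-12.8b-3epoch-batch5-positive_data"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# KoAlpaca-style prompt format (assumed, not documented in the card).
prompt = "### 질문: 기분이 좋아지는 방법을 알려줘.\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```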
Suva/bge-large-finetuned
Suva
2024-01-12T09:58:11Z
19
0
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-01-12T09:44:40Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # Suva/bge-large-finetuned This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Suva/bge-large-finetuned') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Suva/bge-large-finetuned) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 50, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 40, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
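Beyond the encode example above, a small semantic-search sketch (cosine similarity via `sentence_transformers.util`) shows the intended retrieval-style use; the corpus and query are illustrative only.

```python
# Hedged sketch: semantic search with the 1024-dimensional embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Suva/bge-large-finetuned")
corpus = [
    "The invoice is due next Friday.",
    "Our office moves to the new building in March.",
    "Payment terms are net 30 days.",
]
query = "When do I have to pay?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
for hit in util.semantic_search(query_emb, corpus_emb, top_k=2)[0]:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```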
tuantran1632001/Psyfighter2-Orca2-13B-ties-GGUF
tuantran1632001
2024-01-12T09:51:46Z
155
0
null
[ "gguf", "GGUF", "KoboldAI/LLaMA2-13B-Psyfighter2", "microsoft/Orca-2-13b", "base_model:tuantran1632001/Psyfighter2-Orca2-13B-ties", "base_model:quantized:tuantran1632001/Psyfighter2-Orca2-13B-ties", "license:other", "endpoints_compatible", "region:us" ]
null
2024-01-11T15:52:06Z
--- license: other license_name: microsoft-research-license tags: - GGUF - KoboldAI/LLaMA2-13B-Psyfighter2 - microsoft/Orca-2-13b model_type: llama model_name: Psyfighter2-Orca2-13B-ties quantized_by: tuantran1632001 base_model: tuantran1632001/Psyfighter2-Orca2-13B-ties --- These are the GGUF quantizations of the merged model [tuantran1632001/Psyfighter2-Orca2-13B-ties](https://huggingface.co/tuantran1632001/Psyfighter2-Orca2-13B-ties).

| File | Quantization | Size |
|------|--------------|------|
| [Psyfighter2-Orca2-13B-ties-fp16.gguf](Psyfighter2-Orca2-13B-ties-fp16.gguf) | fp16 | 25GiB |
| [Psyfighter2-Orca2-13B-ties-Q2_K.gguf](Psyfighter2-Orca2-13B-ties-Q2_K.gguf) | Q2_K | 5.1GiB |
| [Psyfighter2-Orca2-13B-ties-Q3_K_L.gguf](Psyfighter2-Orca2-13B-ties-Q3_K_L.gguf) | Q3_K_L | 6.5GiB |
| [Psyfighter2-Orca2-13B-ties-Q3_K_M.gguf](Psyfighter2-Orca2-13B-ties-Q3_K_M.gguf) | Q3_K_M | 6.0GiB |
| [Psyfighter2-Orca2-13B-ties-Q3_K_S.gguf](Psyfighter2-Orca2-13B-ties-Q3_K_S.gguf) | Q3_K_S | 5.3GiB |
| [Psyfighter2-Orca2-13B-ties-Q4_0.gguf](Psyfighter2-Orca2-13B-ties-Q4_0.gguf) | Q4_0 | 6.9GiB |
| [Psyfighter2-Orca2-13B-ties-Q4_1.gguf](Psyfighter2-Orca2-13B-ties-Q4_1.gguf) | Q4_1 | 6.8GiB |
| [Psyfighter2-Orca2-13B-ties-Q4_K_M.gguf](Psyfighter2-Orca2-13B-ties-Q4_K_M.gguf) | Q4_K_M | 7.7GiB |
| [Psyfighter2-Orca2-13B-ties-Q4_K_S.gguf](Psyfighter2-Orca2-13B-ties-Q4_K_S.gguf) | Q4_K_S | 7.0GiB |
| [Psyfighter2-Orca2-13B-ties-Q5_0.gguf](Psyfighter2-Orca2-13B-ties-Q5_0.gguf) | Q5_0 | 8.4GiB |
| [Psyfighter2-Orca2-13B-ties-Q5_1.gguf](Psyfighter2-Orca2-13B-ties-Q5_1.gguf) | Q5_1 | 9.2GiB |
| [Psyfighter2-Orca2-13B-ties-Q5_K_M.gguf](Psyfighter2-Orca2-13B-ties-Q5_K_M.gguf) | Q5_K_M | 8.6GiB |
| [Psyfighter2-Orca2-13B-ties-Q5_K_S.gguf](Psyfighter2-Orca2-13B-ties-Q5_K_S.gguf) | Q5_K_S | 8.4GiB |
| [Psyfighter2-Orca2-13B-ties-Q6_K.gguf](Psyfighter2-Orca2-13B-ties-Q6_K.gguf) | Q6_K | 10GiB |
| [Psyfighter2-Orca2-13B-ties-Q8_0.gguf](Psyfighter2-Orca2-13B-ties-Q8_0.gguf) | Q8_0 | 13GiB |
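As a usage note, the sketch below fetches one of the files listed above and runs it with `llama-cpp-python`; Q4_K_M is chosen only as a common size/quality middle ground, and any other row from the table works the same way.

```python
# Hedged sketch: download a quantization from the table and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="tuantran1632001/Psyfighter2-Orca2-13B-ties-GGUF",
    filename="Psyfighter2-Orca2-13B-ties-Q4_K_M.gguf",  # filename taken from the table above
)
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)  # n_gpu_layers=-1 offloads all layers
print(llm("Write a two-sentence story about a fox.", max_tokens=96)["choices"][0]["text"])
```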
tiennn/your_output_directory
tiennn
2024-01-12T09:46:18Z
92
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-12T09:45:43Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: your_output_directory results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # your_output_directory This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.4 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
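Since the card does not show how to run the model, here is a minimal inference sketch; the training dataset and label names are not documented, so expect generic ids such as `LABEL_0`/`LABEL_1` unless the config says otherwise.

```python
# Hedged sketch: generic inference with the fine-tuned DistilBERT classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="tiennn/your_output_directory")
print(clf("This is a sample sentence to classify."))
```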
NhatTranKKK/Reinforce_model1
NhatTranKKK
2024-01-12T09:39:30Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-12T09:39:22Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce_model1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
learn3r/longt5_xl_sfd_4096_e10
learn3r
2024-01-12T09:38:05Z
2
0
transformers
[ "transformers", "pytorch", "longt5", "text2text-generation", "generated_from_trainer", "dataset:tau/scrolls", "base_model:google/long-t5-tglobal-xl", "base_model:finetune:google/long-t5-tglobal-xl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-11T12:23:08Z
--- license: apache-2.0 base_model: google/long-t5-tglobal-xl tags: - generated_from_trainer datasets: - tau/scrolls model-index: - name: longt5_xl_sfd_4096_e10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # longt5_xl_sfd_4096_e10 This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/long-t5-tglobal-xl) on the tau/scrolls summ_screen_fd dataset. It achieves the following results on the evaluation set: - Loss: 2.3255 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0332 | 0.97 | 14 | 2.5424 | | 2.4105 | 1.95 | 28 | 2.3255 | | 2.0496 | 2.99 | 43 | 2.3420 | | 1.7473 | 3.97 | 57 | 2.3520 | | 1.4007 | 4.94 | 71 | 2.4980 | | 1.3809 | 5.98 | 86 | 2.4785 | | 1.1153 | 6.96 | 100 | 2.7326 | | 0.9129 | 8.0 | 115 | 2.9232 | | 0.7118 | 8.97 | 129 | 3.0476 | | 0.5883 | 9.74 | 140 | 3.3644 | ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
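No usage example is given; a minimal summarization sketch for this checkpoint follows. The transcript placeholder and generation settings are illustrative; the model name suggests it was trained with 4096-token inputs on SummScreenFD-style episode transcripts.

```python
# Hedged sketch: summarize a long transcript with the fine-tuned LongT5-XL checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "learn3r/longt5_xl_sfd_4096_e10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="auto")

transcript = "SCENE 1. INT. KITCHEN - MORNING. ..."  # replace with a full episode transcript
inputs = tokenizer(transcript, return_tensors="pt", truncation=True, max_length=4096).to(model.device)
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```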
MaziyarPanahi/Tulpar-7b-v2-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T09:25:21Z
25
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "HyperbeeAI/Tulpar-7b-v2", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T09:20:22Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - HyperbeeAI/Tulpar-7b-v2 --- # Tulpar-7b-v2-Mistral-7B-Instruct-v0.2-slerp Tulpar-7b-v2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [HyperbeeAI/Tulpar-7b-v2](https://huggingface.co/HyperbeeAI/Tulpar-7b-v2) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: HyperbeeAI/Tulpar-7b-v2 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Tulpar-7b-v2-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
FeiiYin/lora-trained-xl-audi-blue-800-1e-5
FeiiYin
2024-01-12T09:23:37Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-12T09:12:54Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'A photo of sks car on the street' output: url: "image_0.png" - text: 'A photo of sks car on the street' output: url: "image_1.png" - text: 'A photo of sks car on the street' output: url: "image_2.png" - text: 'A photo of sks car on the street' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks car license: openrail++ --- # SDXL LoRA DreamBooth - FeiiYin/lora-trained-xl-audi-blue-800-1e-5 <Gallery /> ## Model description These are FeiiYin/lora-trained-xl-audi-blue-800-1e-5 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks car to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](FeiiYin/lora-trained-xl-audi-blue-800-1e-5/tree/main) them in the Files & versions tab.
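A minimal inference sketch, assuming the standard diffusers LoRA loading path; it also swaps in the `madebyollin/sdxl-vae-fp16-fix` VAE mentioned in the card and uses the instance prompt as the trigger.

```python
# Hedged sketch: load the DreamBooth LoRA together with the fp16-fix VAE named in the card.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("FeiiYin/lora-trained-xl-audi-blue-800-1e-5")

image = pipe("A photo of sks car on the street", num_inference_steps=30).images[0]
image.save("sks_car.png")
```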
sandeepksingh1/Llama-2-7b-chat-hf-IA3_50_V3
sandeepksingh1
2024-01-12T09:15:15Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-12T09:15:13Z
--- library_name: peft base_model: NousResearch/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Chattiori/GranadaMix
Chattiori
2024-01-12T09:07:48Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-11T11:25:20Z
--- license: creativeml-openrail-m ---
semihGuner2002/distilbert-base-uncased-finetuned-URL
semihGuner2002
2024-01-12T09:06:20Z
46
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "dataset:semihGuner2002/PhishingURLsDataset", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-07T13:58:34Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: semihGuner2002/distilbert-base-uncased-finetuned-URL results: [] datasets: - semihGuner2002/PhishingURLsDataset --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semihGuner2002/distilbert-base-uncased-finetuned-URL This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on [my own phishing URL dataset](https://huggingface.co/datasets/semihGuner2002/PhishingURLsDataset). It achieves the following results on the evaluation set: - Train Loss: 0.0065 - Validation Loss: 0.0589 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0733 | 0.0372 | 0 | | 0.0339 | 0.0487 | 1 | | 0.0191 | 0.0379 | 2 | | 0.0103 | 0.0441 | 3 | | 0.0065 | 0.0589 | 4 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
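A short inference sketch for the TensorFlow checkpoint follows; which label id corresponds to "phishing" versus "legitimate" is not stated in the card, so check `model.config.id2label` rather than assuming.

```python
# Hedged sketch: score a URL with the TensorFlow checkpoint of the classifier.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "semihGuner2002/distilbert-base-uncased-finetuned-URL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("http://paypa1-login.example.com/verify", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1).numpy()[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```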
mtc/meta-llama-Llama-2-7b-hf-arxiv-summarization-5000-no_quantization-2k-lora-full
mtc
2024-01-12T09:06:07Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-01-12T06:30:18Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
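Since the template's "How to Get Started" section is empty, here is a minimal sketch for using the adapter; the base weights are gated (`meta-llama/Llama-2-7b-hf` requires access), and the summarization prompt template is an assumption because the training format is not documented.

```python
# Hedged sketch: load the LoRA adapter (base model resolved from the adapter config) and summarize.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "mtc/meta-llama-Llama-2-7b-hf-arxiv-summarization-5000-no_quantization-2k-lora-full"
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated; request access first
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.float16, device_map="auto")

paper_text = "..."  # abstract or truncated body of an arXiv paper
prompt = f"Summarize the following article.\n\n{paper_text}\n\nSummary:"  # assumed template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=200)[0], skip_special_tokens=True))
```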
mtc/meta-llama-Llama-2-7b-hf-pubmed-summarization-5000-no-quantization-2k-lora-full
mtc
2024-01-12T09:05:04Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-01-12T06:19:30Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
MaziyarPanahi/agiin-13.6B-v0.1-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T08:36:20Z
23
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "mncai/agiin-13.6B-v0.1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T08:31:02Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - mncai/agiin-13.6B-v0.1 --- # agiin-13.6B-v0.1-Mistral-7B-Instruct-v0.2-slerp agiin-13.6B-v0.1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [mncai/agiin-13.6B-v0.1](https://huggingface.co/mncai/agiin-13.6B-v0.1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: mncai/agiin-13.6B-v0.1 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/agiin-13.6B-v0.1-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
thebeautifulmegumiasaoka/Megumi_Asaoka
thebeautifulmegumiasaoka
2024-01-12T08:34:19Z
0
0
null
[ "singer", "text-to-speech", "ja", "dataset:wikimedia/wikipedia", "license:apache-2.0", "region:us" ]
text-to-speech
2024-01-12T08:20:52Z
--- license: apache-2.0 datasets: - wikimedia/wikipedia language: - ja tags: - singer metrics: - character pipeline_tag: text-to-speech ---
PistachioAlt/Synatra-MCS-7B-v0.3-RP-Slerp
PistachioAlt
2024-01-12T08:18:19Z
1,492
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-11T10:23:11Z
--- license: cc-by-nc-4.0 tags: - merge --- ```yaml slices: - sources: - model: Q-bert/MetaMath-Cybertron-Starling layer_range: [0, 32] - model: maywell/Synatra-7B-v0.3-RP layer_range: [0, 32] merge_method: slerp base_model: Q-bert/MetaMath-Cybertron-Starling parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: float16 ```
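The card only documents the merge recipe; a plain text-generation sketch in the style used elsewhere in this collection would look like the following (prompt and sampling settings are illustrative).

```python
# Hedged sketch: ordinary transformers inference with the merged model.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="PistachioAlt/Synatra-MCS-7B-v0.3-RP-Slerp",
    torch_dtype=torch.float16,
    device_map="auto",
)
out = pipe("한국의 수도는 어디인가요?", max_new_tokens=64, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```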
NhatTranKKK/dqn-SpaceInvadersNoFrameskip-v4_1
NhatTranKKK
2024-01-12T08:14:39Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-12T08:13:58Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 586.00 +/- 154.11 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NhatTranKKK -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NhatTranKKK -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga NhatTranKKK ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T08:14:25Z
21
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Minirecord/Mini_synata_7b_011", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T08:09:27Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Minirecord/Mini_synata_7b_011 --- # Mini_synata_7b_011-Mistral-7B-Instruct-v0.2-slerp Mini_synata_7b_011-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Minirecord/Mini_synata_7b_011](https://huggingface.co/Minirecord/Mini_synata_7b_011) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Minirecord/Mini_synata_7b_011 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jeongyeom/xlm-roberta-base-finetuned-panx-all
jeongyeom
2024-01-12T08:07:14Z
89
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-12T07:53:37Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1747 - F1: 0.8551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2976 | 1.0 | 835 | 0.1926 | 0.8180 | | 0.1581 | 2.0 | 1670 | 0.1775 | 0.8276 | | 0.1041 | 3.0 | 2505 | 0.1747 | 0.8551 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
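A minimal tagging sketch follows; the label set is expected to follow the PAN-X / WikiANN scheme (PER, ORG, LOC), although the card itself does not list it.

```python
# Hedged sketch: run the fine-tuned XLM-R tagger as an NER pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jeongyeom/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean works at Google in Mountain View."))
```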
MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T08:04:21Z
25
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "MRAIRR/MRAI_synatra_7B_v1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T07:59:29Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - MRAIRR/MRAI_synatra_7B_v1 --- # MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [MRAIRR/MRAI_synatra_7B_v1](https://huggingface.co/MRAIRR/MRAI_synatra_7B_v1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: MRAIRR/MRAI_synatra_7B_v1 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
ai-anytime/unsloth_4bit_mistral_imdb_model
ai-anytime
2024-01-12T07:57:12Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/mistral-7b", "base_model:adapter:unsloth/mistral-7b", "region:us" ]
null
2024-01-12T07:56:46Z
--- library_name: peft base_model: unsloth/mistral-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
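The card above leaves its "How to Get Started" section empty. As a rough, untested sketch only (not part of the card), the adapter could presumably be attached to its base checkpoint with `peft`; the base model is taken from the repo metadata, and the prompt and generation settings below are placeholder assumptions.

```python
# Untested sketch: load the PEFT adapter on top of its base model
# (unsloth/mistral-7b, per the repo metadata). The prompt and generation
# settings are illustrative placeholders, not values from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b"
adapter_id = "ai-anytime/unsloth_4bit_mistral_imdb_model"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("The movie was", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```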
MaziyarPanahi/Mistral-7B-KNUT-v0.2-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T07:55:40Z
24
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Herry443/Mistral-7B-KNUT-v0.2", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T07:50:55Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Herry443/Mistral-7B-KNUT-v0.2 --- # Mistral-7B-KNUT-v0.2-Mistral-7B-Instruct-v0.2-slerp Mistral-7B-KNUT-v0.2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Herry443/Mistral-7B-KNUT-v0.2](https://huggingface.co/Herry443/Mistral-7B-KNUT-v0.2) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Herry443/Mistral-7B-KNUT-v0.2 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Mistral-7B-KNUT-v0.2-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jeongyeom/xlm-roberta-base-finetuned-panx-en
jeongyeom
2024-01-12T07:53:30Z
100
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-12T07:51:41Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4024 - F1: 0.6866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1536 | 1.0 | 50 | 0.6294 | 0.5349 | | 0.5343 | 2.0 | 100 | 0.4330 | 0.6401 | | 0.3617 | 3.0 | 150 | 0.4024 | 0.6866 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
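The card above reports metrics but no usage snippet. A minimal sketch (not from the card) of running the checkpoint with the `transformers` token-classification pipeline follows; the example sentence is arbitrary, and the label scheme is assumed to follow the PAN-X NER setup suggested by the repo name.

```python
# Minimal sketch: run the fine-tuned checkpoint as an NER pipeline.
# The input sentence is an arbitrary example, not taken from the card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jeongyeom/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Jeff Dean works at Google in Mountain View."))
```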
jeongyeom/xlm-roberta-base-finetuned-panx-it
jeongyeom
2024-01-12T07:51:35Z
89
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-12T07:49:32Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2557 - F1: 0.8083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.834 | 1.0 | 70 | 0.3297 | 0.7233 | | 0.2913 | 2.0 | 140 | 0.2851 | 0.7810 | | 0.1944 | 3.0 | 210 | 0.2557 | 0.8083 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
samuelesam/Vipoo1completo
samuelesam
2024-01-12T07:48:42Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-06-30T16:01:28Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
smutuvi/whisper-small-sw-common-voice-ndizi-158-50epochs
smutuvi
2024-01-12T07:47:05Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:smutuvi/whisper-small-sw-common-voice", "base_model:adapter:smutuvi/whisper-small-sw-common-voice", "license:apache-2.0", "region:us" ]
null
2024-01-12T07:47:02Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: smutuvi/whisper-small-sw-common-voice model-index: - name: whisper-small-sw-common-voice-ndizi-158-50epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-sw-common-voice-ndizi-158-50epochs This model is a fine-tuned version of [smutuvi/whisper-small-sw-common-voice](https://huggingface.co/smutuvi/whisper-small-sw-common-voice) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6112 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 18 | 1.9359 | | 1.7712 | 2.0 | 36 | 1.9211 | | 1.7254 | 3.0 | 54 | 1.8970 | | 1.7254 | 4.0 | 72 | 1.8737 | | 1.6754 | 5.0 | 90 | 1.8498 | | 1.6138 | 6.0 | 108 | 1.8281 | | 1.5935 | 7.0 | 126 | 1.8054 | | 1.5935 | 8.0 | 144 | 1.7885 | | 1.5657 | 9.0 | 162 | 1.7707 | | 1.4709 | 10.0 | 180 | 1.7551 | | 1.4709 | 11.0 | 198 | 1.7389 | | 1.5099 | 12.0 | 216 | 1.7300 | | 1.5411 | 13.0 | 234 | 1.7178 | | 1.4451 | 14.0 | 252 | 1.7097 | | 1.4451 | 15.0 | 270 | 1.7003 | | 1.3941 | 16.0 | 288 | 1.6966 | | 1.403 | 17.0 | 306 | 1.6892 | | 1.403 | 18.0 | 324 | 1.6833 | | 1.434 | 19.0 | 342 | 1.6766 | | 1.377 | 20.0 | 360 | 1.6718 | | 1.349 | 21.0 | 378 | 1.6680 | | 1.349 | 22.0 | 396 | 1.6610 | | 1.3351 | 23.0 | 414 | 1.6592 | | 1.421 | 24.0 | 432 | 1.6527 | | 1.3146 | 25.0 | 450 | 1.6508 | | 1.3146 | 26.0 | 468 | 1.6470 | | 1.3393 | 27.0 | 486 | 1.6451 | | 1.3039 | 28.0 | 504 | 1.6394 | | 1.3039 | 29.0 | 522 | 1.6391 | | 1.3886 | 30.0 | 540 | 1.6354 | | 1.2247 | 31.0 | 558 | 1.6345 | | 1.2959 | 32.0 | 576 | 1.6306 | | 1.2959 | 33.0 | 594 | 1.6292 | | 1.3447 | 34.0 | 612 | 1.6266 | | 1.2708 | 35.0 | 630 | 1.6246 | | 1.2708 | 36.0 | 648 | 1.6217 | | 1.2882 | 37.0 | 666 | 1.6229 | | 1.2963 | 38.0 | 684 | 1.6186 | | 1.2696 | 39.0 | 702 | 1.6202 | | 1.2696 | 40.0 | 720 | 1.6196 | | 1.2019 | 41.0 | 738 | 1.6164 | | 1.3452 | 42.0 | 756 | 1.6157 | | 1.3452 | 43.0 | 774 | 1.6145 | | 1.2787 | 44.0 | 792 | 1.6154 | | 1.2513 | 45.0 | 810 | 1.6120 | | 1.2227 | 46.0 | 828 | 1.6146 | | 1.2227 | 47.0 | 846 | 1.6120 | | 1.2523 | 48.0 | 864 | 1.6125 | | 1.2498 | 49.0 | 882 | 1.6119 | | 1.3094 | 50.0 | 900 | 1.6112 | ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.37.0.dev0 - Pytorch 2.0.0 - Datasets 2.16.1 - Tokenizers 0.15.0
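The card above documents training but not inference. As a rough, untested sketch (not part of the card), the adapter could be attached to its base checkpoint, `smutuvi/whisper-small-sw-common-voice`, with `peft`; the silent placeholder waveform below stands in for real 16 kHz Swahili audio.

```python
# Untested sketch: attach the LoRA/PEFT adapter to the base Whisper
# checkpoint and transcribe a waveform. The placeholder audio below is
# one second of silence; replace it with real 16 kHz mono audio.
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base_id = "smutuvi/whisper-small-sw-common-voice"
adapter_id = "smutuvi/whisper-small-sw-common-voice-ndizi-158-50epochs"

processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id).eval()

audio = np.zeros(16000, dtype=np.float32)  # placeholder clip
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```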
biennh/whisper-large-v3-vi
biennh
2024-01-12T07:35:31Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "region:us" ]
null
2024-01-06T07:50:56Z
--- library_name: peft base_model: openai/whisper-large-v3 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Kooten/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-3.5bpw-exl2
Kooten
2024-01-12T07:32:02Z
13
5
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T10:20:29Z
--- license: cc-by-nc-4.0 --- # Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss 3.5bpw Exllama quant of [NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss). You will need 24 GB of VRAM to run this model at roughly half context (16k; you can probably go a bit higher). ### Prompt format: ChatML ``` <|im_start|>system {sysprompt}<|im_end|> <|im_start|>user {input}<|im_end|> <|im_start|>assistant {output}<|im_end|> ``` ### Contact Kooten on discord.
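As a small, hedged illustration (not part of the card), the ChatML template above can be assembled programmatically; the system prompt and user message below are arbitrary placeholders.

```python
# Helper that assembles the ChatML prompt format shown in the card above.
# The system prompt and user message are arbitrary placeholders.
def chatml_prompt(sysprompt: str, user_input: str) -> str:
    return (
        f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_input}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Write a short greeting."))
```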
MaziyarPanahi/Synatra-7B-Instruct-v0.2-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T07:31:55Z
24
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "maywell/Synatra-7B-Instruct-v0.2", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T07:26:47Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - maywell/Synatra-7B-Instruct-v0.2 --- # Synatra-7B-Instruct-v0.2-Mistral-7B-Instruct-v0.2-slerp Synatra-7B-Instruct-v0.2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [maywell/Synatra-7B-Instruct-v0.2](https://huggingface.co/maywell/Synatra-7B-Instruct-v0.2) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: maywell/Synatra-7B-Instruct-v0.2 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Synatra-7B-Instruct-v0.2-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
ByunByun/qlora-koalpaca-polyglot-12.8b-1epoch-positive_data
ByunByun
2024-01-12T07:31:33Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "region:us" ]
null
2024-01-12T07:31:26Z
--- library_name: peft base_model: beomi/polyglot-ko-12.8b-safetensors --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
srijina/PR-Firm-in-Delhi13
srijina
2024-01-12T07:27:56Z
0
0
null
[ "region:us" ]
null
2024-01-12T07:17:33Z
Emerging Social Platforms: PR Strategies for the Next Big Thing Public Relations (PR) has undergone a significant transformation with the rise of emerging social platforms. As digital landscapes evolve, so do the strategies needed to make a mark in the ever-expanding online world. In this article, we will delve into the intricacies of crafting effective PR strategies for the next big thing in social media. Introduction Definition of Emerging Social Platforms Emerging social platforms refer to the latest entrants in the digital space, gaining traction and reshaping online interactions. Significance of PR Strategies In an era dominated by social media, PR strategies play a pivotal role in shaping brand perception and engagement. Understanding Emerging Social Platforms Definition and Characteristics Emerging platforms encompass a diverse range, from niche networks to innovative features on established platforms. Importance in the Digital Landscape Understanding the role of emerging platforms is crucial for brands aiming to stay ahead in the digital landscape. The Need for PR Strategies Establishing Credibility PR strategies are vital in establishing credibility on new platforms, fostering trust among audiences. Building a Strong Online Presence Crafting a robust online presence requires strategic PR efforts to ensure visibility and relevance. Crafting a Compelling Narrative Identifying Unique Selling Points Highlighting unique aspects sets the brand apart, creating a compelling narrative for audiences. Storytelling Techniques Effective storytelling captivates audiences, making the brand memorable in a crowded digital space. Leveraging Influencers and Advocates Identifying Key Influencers Collaborating with influencers amplifies brand reach, leveraging their existing audience. Building Relationships for PR Success Nurturing relationships with advocates ensures a sustained positive image on social platforms. Utilizing Visual Content Importance of Visuals in PR Visual content enhances engagement, making it imperative for PR strategies on social platforms. Creating Engaging Visuals for Social Platforms Striking visuals that resonate with the target audience contribute significantly to PR success. Monitoring and Adaptation Keeping Track of Trends Staying abreast of trends is essential for adapting PR strategies to evolving audience preferences. Adapting Strategies for Maximum Impact Flexibility in adapting strategies ensures maximum impact on emerging platforms. Measuring Success Key Metrics for Social PR Measuring success involves analyzing key metrics like engagement, reach, and conversion rates. Analyzing Campaign Effectiveness Evaluating campaign effectiveness helps in refining strategies for future endeavors. Case Studies: Successful PR on Emerging Platforms Examples of Effective PR Campaigns Examining successful PR campaigns provides insights into winning strategies. Key Takeaways for Implementing Strategies Extracting key takeaways from case studies aids in implementing effective PR on emerging platforms. Challenges and Solutions Common Challenges in PR for Emerging Platforms Navigating challenges, such as limited user base and evolving algorithms, requires creative solutions. Creative Solutions for Overcoming Obstacles Innovation is key to overcoming challenges and making a mark on emerging social platforms. XI. Staying Ahead of the Curve Continuous Learning and Adaptation Staying ahead involves continuous learning and adapting strategies to dynamic digital landscapes. 
Future-proofing PR Strategies Future-proofing ensures that PR strategies remain relevant as the digital landscape evolves. The Role of User-generated Content Harnessing the Power of User-generated Content Encouraging user-generated content enhances authenticity and fosters community engagement. Encouraging User Participation Engaging users in content creation fosters a sense of community and brand loyalty. Community Building Establishing and Nurturing Online Communities Building online communities is pivotal for sustained engagement and brand advocacy. Encouraging Engagement within the Community Facilitating engagement within communities solidifies the brand's position on emerging platforms. Humanizing the Brand Personalization in PR Humanizing the brand through personalized interactions fosters stronger connections with the audience. Connecting with Audiences on a Human Level Establishing a human connection on social platforms creates a lasting impact on brand perception. The Impact of Emerging Social Platforms on Traditional PR Shifting Dynamics in the PR Landscape The rise of emerging platforms is reshaping traditional PR dynamics, necessitating a blended approach. Integrating Traditional and Digital PR Efforts Successfully navigating the changing landscape requires integrating traditional and digital PR efforts seamlessly. Conclusion In conclusion, effective PR on emerging social platforms requires a nuanced understanding of the digital landscape, a commitment to continuous learning, and a creative approach to overcoming challenges. By leveraging influencers, crafting compelling narratives, and embracing user-generated content, brands can establish a strong presence on the next big thing in social media. Read also these URL's: https://twenty7inc.in/ https://twenty7inc.in/pr-agency-in-india/ https://twenty7inc.in/pr-agency-in-noida/ https://twenty7inc.in/pr-agency-in-mumbai/ https://twenty7inc.in/best-pr-agency-in-bangalore/ https://twenty7inc.in/pr-agency-in-hyderabad/ https://twenty7inc.in/best-pr-agency-in-gurgaon/
itzzdeep/youtube-thumbnails-sdxl-lora-v3
itzzdeep
2024-01-12T07:24:50Z
1,497
2
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-12T05:57:54Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'instance_prompt' base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: instance_prompt license: openrail++ --- # SDXL LoRA DreamBooth - itzzdeep/youtube-thumbnails-sdxl-lora-v3 <Gallery /> ## Model description ### These are itzzdeep/youtube-thumbnails-sdxl-lora-v3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`youtube-thumbnails-sdxl-lora-v3.safetensors` here 💾](/itzzdeep/youtube-thumbnails-sdxl-lora-v3/blob/main/youtube-thumbnails-sdxl-lora-v3.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:youtube-thumbnails-sdxl-lora-v3:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`youtube-thumbnails-sdxl-lora-v3_emb.safetensors` here 💾](/itzzdeep/youtube-thumbnails-sdxl-lora-v3/blob/main/youtube-thumbnails-sdxl-lora-v3_emb.safetensors)**. - Place it in your `embeddings` folder. - Use it by adding `youtube-thumbnails-sdxl-lora-v3_emb` to your prompt. For example, `instance_prompt` (you need both the LoRA and the embeddings, as they were trained together for this LoRA). ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('itzzdeep/youtube-thumbnails-sdxl-lora-v3', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='itzzdeep/youtube-thumbnails-sdxl-lora-v3', filename='youtube-thumbnails-sdxl-lora-v3_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('instance_prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/itzzdeep/youtube-thumbnails-sdxl-lora-v3/tree/main). The weights were trained using the [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.