Dataset columns (type and observed min/max):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-27 06:27:46 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (499 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-27 06:26:25 |
| card | string (length) | 11 | 1.01M |
ostapeno/indepexp_adauniNeo1B_niv2_explanation_sub05_3ep
ostapeno
2024-01-12T15:52:33Z
0
0
null
[ "region:us" ]
null
2024-01-12T14:24:54Z
Number of experts present in the library: 3 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | niv2_explanation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora | | niv2_explanation_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora | | niv2_explanation_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora | Last updated on: 2024-01-12 15:52:32+00:00
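The card above lists LoRA experts trained on EleutherAI/gpt-neo-1.3B but gives no loading code. A minimal sketch, assuming the experts are stored as standard PEFT adapters in per-expert subfolders (the subfolder name and storage layout are assumptions, not confirmed by the card; the same pattern would apply to the other expert-library cards further down):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the shared base model the experts were trained on.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Hypothetical: attach one expert, assuming it is saved as a PEFT adapter
# under a subfolder named after the expert ("niv2_explanation").
model = PeftModel.from_pretrained(
    base,
    "ostapeno/indepexp_adauniNeo1B_niv2_explanation_sub05_3ep",
    subfolder="niv2_explanation",
)
```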
digiplay/CleanLinearMix_nsfw
digiplay
2024-01-12T15:52:01Z
40,245
14
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-04T16:09:08Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/42433?modelVersionId=62183 Sample images generated by Hugging Face's API: ![da0b3082-0b3f-48b9-970d-47e53900e65b.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/DrBhTg22hAmgKLZ3_8ERF.jpeg) prompt: 4k, lake, duck, 1girl, picnic, close up, sakura trees
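The card above does not show loading code; since its tags advertise a diffusers StableDiffusionPipeline checkpoint, a minimal sketch (the prompt simply reuses the sample above) could be:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint advertised by the card's diffusers tags.
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/CleanLinearMix_nsfw", torch_dtype=torch.float16
).to("cuda")

# Reproduce the sample prompt shown in the card.
image = pipe("4k, lake, duck, 1girl, picnic, close up, sakura trees").images[0]
image.save("sample.png")
```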
jysssacc/bloomz-560m_fine_lr0.0005_bs10_epoch5_wd0.01
jysssacc
2024-01-12T15:51:50Z
88
0
transformers
[ "transformers", "safetensors", "bloom", "text-generation", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:finetune:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T15:49:59Z
--- license: bigscience-bloom-rail-1.0 base_model: bigscience/bloomz-560m tags: - generated_from_trainer model-index: - name: bloomz-560m_fine_lr0.0005_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_fine_lr0.0005_bs10_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.6816 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 4.3998 | | 3.3093 | 2.0 | 126 | 5.4109 | | 3.3093 | 3.0 | 189 | 14.7274 | | 3.2195 | 4.0 | 252 | 7.4367 | | 2.4579 | 5.0 | 315 | 7.6816 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
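The auto-generated card leaves the usage sections empty; a minimal inference sketch for this text-generation checkpoint (the prompt is illustrative, and the same pattern applies to the other auto-generated trainer cards in this dump) might be:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a standard text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="jysssacc/bloomz-560m_fine_lr0.0005_bs10_epoch5_wd0.01",
)
print(generator("The quick brown fox", max_new_tokens=30)[0]["generated_text"])
```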
LoneStriker/UNA-TheBeagle-7b-v1-5.0bpw-h6-exl2
LoneStriker
2024-01-12T15:51:10Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "dataset:jondurbin/bagel-v0.3", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T15:49:08Z
--- license: cc-by-nc-nd-4.0 tags: - generated_from_trainer model-index: - name: UNA-TheBeagle-7b-v1 results: [] datasets: - jondurbin/bagel-v0.3 library_name: transformers --- -- In loving memory of my "LoLa" -- # UNA-TheBeagle-7b-v1 TheBeagle is a 7B-parameter model trained on The Bagel dataset, with DPO & UNA applied over a set of curated DPO pairs. - Scored #1 on the HF Leaderboard with dramatic scores: 73 ARC, and very well balanced! The dataset was generated using the original bagel code, including the decontamination step. As the base model, we used Intel's latest neural-chat model. It performs very well on many tasks, but it's always better to try it out yourself. ![TheBeagle](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1/resolve/main/TheBeagle.png) ## Evaluations Run with vLLM, so expect the numbers to differ slightly from those shown on the leaderboard, but not by much :) ``` vllm (pretrained=fblgit/UNA-TheBeagle-7b-v1,dtype=auto,tensor_parallel_size=1,gpu_memory_utilization=0.8,data_parallel_size=8,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 32 | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |--------------|-------|----------|-----:|-----------|-----:|---|-----:| |arc_challenge |Yaml |none | 25|acc |0.7090|± |0.0133| | | |none | 25|acc_norm |0.7329|± |0.0129| |gsm8k |Yaml |get-answer| 5|exact_match|0.7210|± |0.0124| |hellaswag |Yaml |none | 10|acc |0.7202|± |0.0045| | | |none | 10|acc_norm |0.8792|± |0.0033| |truthfulqa_mc2|Yaml |none | 0|acc |0.7062|± |0.0151| |winogrande |Yaml |none | 5|acc |0.8366|± |0.0104| ``` ## UNA Details For this release, we applied UNA only through the perceptrons. It was done at a rate of 3.5e-7, and the training loop code is the original bagel code together with transformers-4.35.2-UNA. ## Prompt I'm not entirely sure of the prompt format, as we used the vanilla version of the bagel training code. But a good model should be able to generalize across different prompt formats, so feel free to give it a shot. ## Citations Remember, if you use UNA's models, cite them in your model card. ## Limitations Not for commercial use; academic & research purposes only.
LoneStriker/UNA-TheBeagle-7b-v1-4.0bpw-h6-exl2
LoneStriker
2024-01-12T15:49:06Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "dataset:jondurbin/bagel-v0.3", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T15:47:24Z
--- license: cc-by-nc-nd-4.0 tags: - generated_from_trainer model-index: - name: UNA-TheBeagle-7b-v1 results: [] datasets: - jondurbin/bagel-v0.3 library_name: transformers --- -- In loving memory of my "LoLa" -- # UNA-TheBeagle-7b-v1 TheBeagle is a 7B-parameter model trained on The Bagel dataset, with DPO & UNA applied over a set of curated DPO pairs. - Scored #1 on the HF Leaderboard with dramatic scores: 73 ARC, and very well balanced! The dataset was generated using the original bagel code, including the decontamination step. As the base model, we used Intel's latest neural-chat model. It performs very well on many tasks, but it's always better to try it out yourself. ![TheBeagle](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1/resolve/main/TheBeagle.png) ## Evaluations Run with vLLM, so expect the numbers to differ slightly from those shown on the leaderboard, but not by much :) ``` vllm (pretrained=fblgit/UNA-TheBeagle-7b-v1,dtype=auto,tensor_parallel_size=1,gpu_memory_utilization=0.8,data_parallel_size=8,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 32 | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |--------------|-------|----------|-----:|-----------|-----:|---|-----:| |arc_challenge |Yaml |none | 25|acc |0.7090|± |0.0133| | | |none | 25|acc_norm |0.7329|± |0.0129| |gsm8k |Yaml |get-answer| 5|exact_match|0.7210|± |0.0124| |hellaswag |Yaml |none | 10|acc |0.7202|± |0.0045| | | |none | 10|acc_norm |0.8792|± |0.0033| |truthfulqa_mc2|Yaml |none | 0|acc |0.7062|± |0.0151| |winogrande |Yaml |none | 5|acc |0.8366|± |0.0104| ``` ## UNA Details For this release, we applied UNA only through the perceptrons. It was done at a rate of 3.5e-7, and the training loop code is the original bagel code together with transformers-4.35.2-UNA. ## Prompt I'm not entirely sure of the prompt format, as we used the vanilla version of the bagel training code. But a good model should be able to generalize across different prompt formats, so feel free to give it a shot. ## Citations Remember, if you use UNA's models, cite them in your model card. ## Limitations Not for commercial use; academic & research purposes only.
sj011/ppo-LunarLander-v2
sj011
2024-01-12T15:44:41Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-12T15:44:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.49 +/- 14.32 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
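The card's usage section is still a TODO; a minimal sketch with huggingface_sb3 and Stable-Baselines3, assuming the checkpoint follows the usual `<algo>-<env>.zip` naming (the filename is a guess, not confirmed by the card; the same pattern applies to the other PPO LunarLander-v2 card below):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename: course repositories usually store the policy as "<algo>-<env>.zip".
checkpoint = load_from_hub(repo_id="sj011/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the downloaded agent over a few episodes.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```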
ostapeno/indepexp_adauniNeo1B_duorc_SelfRC_generate_question_by_answer_sub05_3ep
ostapeno
2024-01-12T15:43:06Z
0
0
null
[ "region:us" ]
null
2024-01-12T14:24:37Z
Number of experts present in the library: 3 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | duorc_SelfRC_generate_question_by_answer_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora | | duorc_SelfRC_generate_question_by_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora | | duorc_SelfRC_generate_question_by_answer_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora | Last updated on: 2024-01-12 15:43:05+00:00
AdamCodd/tinybert-emotion-balanced
AdamCodd
2024-01-12T15:42:47Z
104
2
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "bert", "text-classification", "generated_from_trainer", "dataset:AdamCodd/emotion-balanced", "base_model:prajjwal1/bert-tiny", "base_model:quantized:prajjwal1/bert-tiny", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-06T23:46:52Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - AdamCodd/emotion-balanced metrics: - accuracy - f1 - recall - precision base_model: prajjwal1/bert-tiny model-index: - name: AdamCodd/tinybert-emotion-balanced results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: default metrics: - type: accuracy value: 0.9354 name: Accuracy - type: loss value: 0.1809 name: Loss - type: f1 value: 0.9354946613311768 name: F1 --- # tinybert-emotion This model is a fine-tuned version of [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the [emotion balanced dataset](https://huggingface.co/datasets/AdamCodd/emotion-balanced). It achieves the following results on the evaluation set: - Loss: 0.1809 - Accuracy: 0.9354 ## Model description TinyBERT is 7.5 times smaller and 9.4 times faster on inference compared to its teacher BERT model (while DistilBERT is 40% smaller and 1.6 times faster than BERT). The model has been trained on 89_754 examples split into train, validation and test. Each label was perfectly balanced in each split. ## Intended uses & limitations This model is not as accurate as the [distilbert-emotion-balanced](https://huggingface.co/AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced) one because the focus was on speed, which can lead to misinterpretation of complex sentences. Despite this, its performance is quite good and should be more than sufficient for most use cases. Usage: ```python from transformers import pipeline # Create the pipeline emotion_classifier = pipeline('text-classification', model='AdamCodd/tinybert-emotion-balanced') # Now you can use the pipeline to classify emotions result = emotion_classifier("We are delighted that you will be coming to visit us. It will be so nice to have you here.") print(result) #[{'label': 'joy', 'score': 0.9895486831665039}] ``` This model faces challenges in accurately categorizing negative sentences, as well as those containing elements of sarcasm or irony. These limitations are largely attributable to TinyBERT's constrained capabilities in semantic understanding. Although the model is generally proficient in emotion detection tasks, it may lack the nuance necessary for interpreting complex emotional nuances. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 1270 - optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 150 - num_epochs: 10 - weight_decay: 0.01 ### Training results precision recall f1-score support sadness 0.9733 0.9245 0.9482 1496 joy 0.9651 0.8864 0.9240 1496 love 0.9127 0.9786 0.9445 1496 anger 0.9479 0.9365 0.9422 1496 fear 0.9213 0.9004 0.9108 1496 surprise 0.9016 0.9866 0.9422 1496 accuracy 0.9355 8976 macro avg 0.9370 0.9355 0.9353 8976 weighted avg 0.9370 0.9355 0.9353 8976 test_acc: 0.9354946613311768 test_loss: 0.1809326708316803 ### Framework versions - Transformers 4.33.0 - Pytorch lightning 2.0.8 - Tokenizers 0.13.3 If you want to support me, you can [here](https://ko-fi.com/adamcodd).
MaziyarPanahi/SlimOpenOrca-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T15:39:28Z
18
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Weyaxi/SlimOpenOrca-Mistral-7B", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T15:34:25Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Weyaxi/SlimOpenOrca-Mistral-7B --- # SlimOpenOrca-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp SlimOpenOrca-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Weyaxi/SlimOpenOrca-Mistral-7B](https://huggingface.co/Weyaxi/SlimOpenOrca-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Weyaxi/SlimOpenOrca-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/SlimOpenOrca-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
NLPinas/yi-bagel-2x34b
NLPinas
2024-01-12T15:37:06Z
58
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "arxiv:2009.03300", "arxiv:1803.05457", "arxiv:1905.07830", "arxiv:2109.07958", "arxiv:1907.10641", "arxiv:2110.14168", "base_model:jondurbin/bagel-dpo-34b-v0.2", "base_model:merge:jondurbin/bagel-dpo-34b-v0.2", "base_model:jondurbin/nontoxic-bagel-34b-v0.2", "base_model:merge:jondurbin/nontoxic-bagel-34b-v0.2", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T08:22:59Z
--- base_model: - jondurbin/bagel-dpo-34b-v0.2 - jondurbin/nontoxic-bagel-34b-v0.2 tags: - mergekit - merge license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE --- # yi-bagel-2x34b Released January 11, 2024 ![bagel-burger](bagel-burger.png) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). For more information, kindly refer to the model cards from jondurbin linked in the section below. This model debuted in the [leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) at rank #4 (January 11, 2024). ## Merge Details ### Merge Method This model is an expertimental merge using the [linear](https://arxiv.org/abs/2203.05482) merge method. This is to assess the degree of which the DPO has an effect, in terms of censoring, as used in [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2). ### Models Merged The following models were included in the merge: * [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) * [jondurbin/nontoxic-bagel-34b-v0.2](https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2) ## Open LLM Leaderboard Metrics (as of January 11, 2024) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 76.60 | | ARC (25-shot) | 72.70 | | HellaSwag (10-shot) | 85.44 | | TruthfulQA (0-shot) | 71.42 | | Winogrande (5-shot) | 82.72 | | GSM8K (5-shot) | 60.73 | | Average | 74.93 | According to the leaderboard description, here are the benchmarks used for the evaluation: - [MMLU](https://arxiv.org/abs/2009.03300) (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. - [AI2 Reasoning Challenge](https://arxiv.org/abs/1803.05457) -ARC- (25-shot) - a set of grade-school science questions. - [HellaSwag](https://arxiv.org/abs/1905.07830) (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models. - [TruthfulQA](https://arxiv.org/abs/2109.07958) (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online. - [Winogrande](https://arxiv.org/abs/1907.10641) (5-shot) - an adversarial and difficult Winograd benchmark at scale, for commonsense reasoning. - [GSM8k](https://arxiv.org/abs/2110.14168) (5-shot) - diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems. ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: jondurbin/nontoxic-bagel-34b-v0.2 parameters: weight: 0.5 - model: jondurbin/bagel-dpo-34b-v0.2 parameters: weight: 0.5 merge_method: linear dtype: float16 ``` ## Further Information For additional information or inquiries about yi-bagel-2x34b, please contact the developer through email: [email protected].
ostapeno/indepexp_adauniNeo1B_super_glue_cb_1_0_2_sub05_3ep
ostapeno
2024-01-12T15:36:50Z
0
0
null
[ "region:us" ]
null
2024-01-12T15:25:46Z
Number of experts present in the library: 3 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | super_glue_cb_1_0_2_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora | | super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora | | super_glue_cb_1_0_2_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora | Last updated on: 2024-01-12 15:36:49+00:00
SashaViatkin/ppo-LunarLander-v2
SashaViatkin
2024-01-12T15:36:26Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-12T15:36:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.42 +/- 16.16 name: mean_reward verified: false --- # **ppo** Agent playing **LunarLander-v2** This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
casque/Amanda
casque
2024-01-12T15:30:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-12T15:30:16Z
--- license: creativeml-openrail-m ---
MaziyarPanahi/Mistral-11B-OmniMix-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T15:27:59Z
18
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Undi95/Mistral-11B-OmniMix", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T15:22:58Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Undi95/Mistral-11B-OmniMix --- # Mistral-11B-OmniMix-Mistral-7B-Instruct-v0.2-slerp Mistral-11B-OmniMix-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Undi95/Mistral-11B-OmniMix](https://huggingface.co/Undi95/Mistral-11B-OmniMix) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Undi95/Mistral-11B-OmniMix layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Mistral-11B-OmniMix-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Weni/WeniGPT-2.0.1-Zephyr-7B-GPTQ-multigpu-dataset-2.0.1
Weni
2024-01-12T15:25:49Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "region:us" ]
null
2024-01-12T12:49:51Z
--- library_name: peft base_model: HuggingFaceH4/zephyr-7b-beta --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Clovernoona/Groupe2_Prediction
Clovernoona
2024-01-12T15:24:37Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-01-12T13:50:32Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description We wanted to build a model to study the behaviour of renewable-energy consumption in France. We started by considering which factors influence renewable-energy consumption and estimated that this consumption could be correlated with coal consumption, GDP per capita, and elapsed time. Hypothesis 1: we chose coal because we assume that France uses less and less coal and that this decline is offset by renewable-energy consumption. Hypothesis 2: we chose GDP per capita because we assume that the more developed a country is, the more it invests in, and therefore uses, renewable energy. Hypothesis 3: we chose the time variable because we assume that renewable-energy use in France increases over time, since the country's policy tends to favour the development of green energy. We started from a linear model with these three variables. The model gave us the explanatory power for consumption and the variables correlated with that consumption. - **Developed by:** [Pauline, Steve, Ahmed, Leonnard, Michaela] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [Linear model] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] https://ourworldindata.org/renewable-energy We obtained the data from the Our World in Data page titled Renewable Energy. - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ostapeno/indepexp_adauniNeo1B_ropes_plain_bottom_hint_sub05_3ep
ostapeno
2024-01-12T15:21:12Z
0
0
null
[ "region:us" ]
null
2024-01-12T14:24:48Z
Number of experts present in the library: 3 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | ropes_plain_bottom_hint_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora | | ropes_plain_bottom_hint | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora | | ropes_plain_bottom_hint_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora | Last updated on: 2024-01-12 15:21:11+00:00
ostapeno/indepexp_adauniNeo1B_ropes_new_situation_background_answer_sub05_3ep
ostapeno
2024-01-12T15:21:07Z
0
0
null
[ "region:us" ]
null
2024-01-12T14:23:56Z
Number of experts present in the library: 3 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | ropes_new_situation_background_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora | | ropes_new_situation_background_answer_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora | | ropes_new_situation_background_answer_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora | Last updated on: 2024-01-12 15:21:06+00:00
ntc-ai/SDXL-LoRA-slider.juggling
ntc-ai
2024-01-12T15:20:51Z
20
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-12T15:20:47Z
--- language: - en thumbnail: "images/evaluate/juggling...passive/juggling_17_3.0.png" widget: - text: juggling output: url: images/juggling_17_3.0.png - text: juggling output: url: images/juggling_19_3.0.png - text: juggling output: url: images/juggling_20_3.0.png - text: juggling output: url: images/juggling_21_3.0.png - text: juggling output: url: images/juggling_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "juggling" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - juggling (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/juggling_17_-3.0.png" width=256 height=256 /> | <img src="images/juggling_17_0.0.png" width=256 height=256 /> | <img src="images/juggling_17_3.0.png" width=256 height=256 /> | | <img src="images/juggling_19_-3.0.png" width=256 height=256 /> | <img src="images/juggling_19_0.0.png" width=256 height=256 /> | <img src="images/juggling_19_3.0.png" width=256 height=256 /> | | <img src="images/juggling_20_-3.0.png" width=256 height=256 /> | <img src="images/juggling_20_0.0.png" width=256 height=256 /> | <img src="images/juggling_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` juggling ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.juggling', weight_name='juggling.safetensors', adapter_name="juggling") # Activate the LoRA pipe.set_adapters(["juggling"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, juggling" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1060+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
jysssacc/opt-350m_adalora_lr5e-06_bs10_epoch5_wd0.01
jysssacc
2024-01-12T15:18:38Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
2024-01-12T15:17:05Z
--- license: other library_name: peft tags: - generated_from_trainer base_model: facebook/opt-350m model-index: - name: opt-350m_adalora_lr5e-06_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_adalora_lr5e-06_bs10_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.8459 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 4.8830 | | 5.0772 | 2.0 | 126 | 4.8783 | | 5.0772 | 3.0 | 189 | 4.8706 | | 5.0677 | 4.0 | 252 | 4.8597 | | 5.0477 | 5.0 | 315 | 4.8459 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-2-Slerp-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T15:18:06Z
19
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T15:13:12Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp --- # OpenHermes-2.5-neural-chat-v3-2-Slerp-Mistral-7B-Instruct-v0.2-slerp OpenHermes-2.5-neural-chat-v3-2-Slerp-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-2-Slerp-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
fairnlp/bert-dropout
fairnlp
2024-01-12T15:16:40Z
89
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "en", "dataset:wikipedia", "arxiv:1810.04805", "arxiv:2010.06032", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-01-12T15:09:27Z
--- language: en license: apache-2.0 datasets: - wikipedia --- # BERT Large Uncased (dropout) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research-datasets/Zari). The model is initialized from the relevant publicly-available checkpoint and pre-training continued over Wikipedia, with increased dropout rate. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the FairNLP team. ### BibTeX entry and citation info ``` @misc{zari, title={Measuring and Reducing Gendered Correlations in Pre-trained Models}, author={Kellie Webster and Xuezhi Wang and Ian Tenney and Alex Beutel and Emily Pitler and Ellie Pavlick and Jilin Chen and Slav Petrov}, year={2020}, eprint={2010.06032}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
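The card gives no inference example; a minimal feature-extraction sketch for this checkpoint (the input sentence is illustrative) could be:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fairnlp/bert-dropout")
model = AutoModel.from_pretrained("fairnlp/bert-dropout")

# Encode a sentence and take the token-level hidden states as features.
inputs = tokenizer("The nurse asked the doctor a question.", return_tensors="pt")
features = model(**inputs).last_hidden_state  # shape: (batch, tokens, hidden)
```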
namtran/Mistral-7b-v0.2-AWQ-GGUF
namtran
2024-01-12T15:14:49Z
5
1
null
[ "gguf", "license:other", "region:us" ]
null
2024-01-12T09:39:40Z
--- inference: false license: other model_type: llama --- # Mistral 7B v0.2 - AWQ GGUF These files are in GGUF format. - Model creator: [Mistralai](https://huggingface.co/mistralai) - Original model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) The model was converted by the combination of [llama.cpp](https://github.com/ggerganov/llama.cpp) and quantization method [AWQ](https://github.com/mit-han-lab/llm-awq) ## How to use models in `llama.cpp` ``` ./main -m Mistral-7b-v0.1-Q2_K.gguf -n 128 --prompt "Once upon a time" ```
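Besides the `./main` invocation above, the same GGUF file can be used from Python via llama-cpp-python; a minimal sketch (the filename mirrors the card's own example and may differ from the actual file in the repository):

```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file (name taken from the card's example).
llm = Llama(model_path="Mistral-7b-v0.1-Q2_K.gguf", n_ctx=2048)

out = llm("Once upon a time", max_tokens=128)
print(out["choices"][0]["text"])
```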
parsak/phi-2-instruct-lora-adapters
parsak
2024-01-12T15:10:19Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-12T15:10:11Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
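The template card above contains no usage code; a minimal sketch that attaches these LoRA adapters to their microsoft/phi-2 base with PEFT (the prompt format and generation settings are assumptions, not taken from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model declared in the card's metadata, then attach the adapters.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "parsak/phi-2-instruct-lora-adapters")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# Hypothetical prompt format; adjust to whatever the adapters were trained with.
inputs = tokenizer("Instruct: Explain what a LoRA adapter is.\nOutput:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```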
mrzbrt/pokemon-lora
mrzbrt
2024-01-12T15:09:15Z
1
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-01-12T11:31:53Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - mrzbrt/pokemon-lora These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the mrzbrt/SCHAEFFER_mel dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
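As a usage sketch to accompany the card above (assuming the adapter is stored in the standard diffusers LoRA layout; the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model named in the card, then apply the LoRA adaption weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("mrzbrt/pokemon-lora")

image = pipe("a painting in the style of the fine-tuning dataset", num_inference_steps=30).images[0]
image.save("lora_sample.png")
```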
jysssacc/bloomz-560m_fine_lr5e-06_bs10_epoch5_wd0.01
jysssacc
2024-01-12T15:06:32Z
89
0
transformers
[ "transformers", "safetensors", "bloom", "text-generation", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:finetune:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T15:04:35Z
--- license: bigscience-bloom-rail-1.0 base_model: bigscience/bloomz-560m tags: - generated_from_trainer model-index: - name: bloomz-560m_fine_lr5e-06_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_fine_lr5e-06_bs10_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.5844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 3.4802 | | 3.849 | 2.0 | 126 | 3.3166 | | 3.849 | 3.0 | 189 | 3.2893 | | 3.0871 | 4.0 | 252 | 3.3753 | | 2.385 | 5.0 | 315 | 3.5844 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
MaziyarPanahi/MistralInstructLongish-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T15:02:20Z
18
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "KnutJaegersberg/MistralInstructLongish", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T14:57:30Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - KnutJaegersberg/MistralInstructLongish --- # MistralInstructLongish-Mistral-7B-Instruct-v0.2-slerp MistralInstructLongish-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [KnutJaegersberg/MistralInstructLongish](https://huggingface.co/KnutJaegersberg/MistralInstructLongish) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: KnutJaegersberg/MistralInstructLongish layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/MistralInstructLongish-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Dotunnorth/rl_course_vizdoom_health_gathering_supreme
Dotunnorth
2024-01-12T15:00:00Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-12T14:59:50Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.95 +/- 3.75 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Dotunnorth/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
ydshieh/clip-roberta-finetuned
ydshieh
2024-01-12T14:58:06Z
81
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-text-dual-encoder", "feature-extraction", "generated_from_trainer", "dataset:ydshieh/coco_dataset_script", "endpoints_compatible", "region:us" ]
feature-extraction
2024-01-12T14:57:22Z
--- tags: - generated_from_trainer datasets: - ydshieh/coco_dataset_script model-index: - name: clip-roberta-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clip-roberta-finetuned This model was trained from scratch on the ydshieh/coco_dataset_script 2017 dataset. It achieves the following results on the evaluation set: - Loss: 1.3862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
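### Example usage (illustrative)
A minimal inference sketch, not part of the generated card; it assumes the repository ships the standard processor files for this vision-text dual-encoder architecture, and the image URL is only a placeholder:
```python
import requests
from PIL import Image
from transformers import VisionTextDualEncoderModel, VisionTextDualEncoderProcessor

model = VisionTextDualEncoderModel.from_pretrained("ydshieh/clip-roberta-finetuned")
processor = VisionTextDualEncoderProcessor.from_pretrained("ydshieh/clip-roberta-finetuned")

# Placeholder image: any RGB image works here.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"],
                   images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity scores
print(probs)
```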
Jeremyy0623/Modele_groupe_3
Jeremyy0623
2024-01-12T14:56:51Z
0
0
null
[ "region:us" ]
null
2024-01-12T14:27:43Z
This model is a predictive analysis of renewable energy consumption in France. Its goal is to predict energy consumption from various renewable sources, based on historical data and on environmental and economic factors. ## Bias, Risks, and Limitations Because it uses linear regression, the model may be limited in capturing non-linear or complex relationships between variables. It can also be sensitive to outliers and to multicollinearity issues. The main risk is over-interpreting the regression coefficients or misapplying the model to situations where the underlying relationships are not linear. In addition, the historical data on which the model is built may carry inherent biases, limiting the generalizability of the predictions to new contexts or time periods. Particular care must be taken when interpreting the results and applying them to energy-policy scenarios. ### Training Data The model was trained on a historical dataset spanning 1965 to 2021, including various energy and socio-economic indicators for France. The specific variables include coal, fossil fuel, and gas consumption; solar, wind, and hydroelectric production; oil consumption; nuclear production; GDP per capita; life expectancy; total population; urban and rural population; and renewable energy mobility. These detailed data provide a view of the factors influencing renewable energy consumption. ### Training Procedure The training procedure used the statsmodels package to implement a linear regression. This approach was chosen to model the relationship between the independent variables, such as the consumption of different energy types and socio-economic indicators, and renewable energy consumption. The process included rigorous data cleaning, a check for collinearity between variables, and the application of statistical techniques to validate the assumptions of linear regression. The emphasis was placed on understanding and interpreting the regression coefficients to provide relevant insights.
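### Example (illustrative)
As a sketch only — the file name and column names below are assumptions, not the project's actual code — an OLS regression of this kind with statsmodels could look like:
```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical column names; the real dataset (1965-2021 French energy and
# socio-economic indicators) is not published with this card.
df = pd.read_csv("france_energy_1965_2021.csv")

X = df[["coal_consumption", "gas_consumption", "gdp_per_capita", "urban_population"]]
X = sm.add_constant(X)            # add the intercept term
y = df["renewable_energy_consumption"]

model = sm.OLS(y, X).fit()        # ordinary least squares fit
print(model.summary())            # coefficients, p-values, R^2, condition number
```
The summary output is where the collinearity and linearity checks mentioned above would be read off (condition number, coefficient signs and p-values).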
alimrb/test-farsi
alimrb
2024-01-12T14:55:21Z
2
0
transformers
[ "transformers", "bloom", "question-answering", "license:bigscience-openrail-m", "endpoints_compatible", "region:us" ]
question-answering
2024-01-12T12:21:50Z
--- license: bigscience-openrail-m ---
cgus/MiniChat-2-3B-exl2
cgus
2024-01-12T14:47:52Z
7
0
transformers
[ "transformers", "llama", "text-generation", "en", "zh", "arxiv:2311.07052", "arxiv:2310.05914", "arxiv:2305.18290", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-01-12T13:40:11Z
--- license: apache-2.0 language: - en - zh inference: false library_name: transformers widget: - text: "<s> [|User|] Hi 👋 </s>[|Assistant|]" --- ## MiniChat-2-3B-EXL2 Original model: [MiniChat-2-3B](https://huggingface.co/GeneZC/MiniChat-2-3B) Model creator: [GeneZC](https://huggingface.co/GeneZC) [4bpw h8 (main)](https://huggingface.co/cgus/MiniChat-2-3B-exl2/tree/main) [4.65bpw h8](https://huggingface.co/cgus/MiniChat-2-3B-exl2/tree/4.65bpw-h8) [5bpw h8](https://huggingface.co/cgus/MiniChat-2-3B-exl2/tree/5bpw-h8) [5.5bpw h8](https://huggingface.co/cgus/MiniChat-2-3B-exl2/tree/5.5bpw-h8) [6bpw h8](https://huggingface.co/cgus/MiniChat-2-3B-exl2/tree/6bpw-h8) [8bpw h8](https://huggingface.co/cgus/MiniChat-2-3B-exl2/tree/8bpw-h8) Quantized with Exllamav2-0.0.11 with default dataset. ## How to run This quantization method uses GPU and requires Exllamav2 loader which can be found in following applications: [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) [KoboldAI](https://github.com/henk717/KoboldAI) [ExUI](https://github.com/turboderp/exui) # Original model card: ## MiniChat-2-3B 📑 [arXiv](https://arxiv.org/abs/2311.07052) | 👻 [GitHub](https://github.com/GeneZC/MiniMA) | 🤗 [HuggingFace-MiniMA](https://huggingface.co/GeneZC/MiniMA-3B) | 🤗 [HuggingFace-MiniChat](https://huggingface.co/GeneZC/MiniChat-3B) | 🤖 [ModelScope-MiniMA](https://modelscope.cn/models/GeneZC/MiniMA-3B) | 🤖 [ModelScope-MiniChat](https://modelscope.cn/models/GeneZC/MiniChat-3B) | 🤗 [HuggingFace-MiniChat-1.5](https://huggingface.co/GeneZC/MiniChat-1.5-3B) | 🤗 [HuggingFace-MiniMA-2](https://huggingface.co/GeneZC/MiniMA-2-3B) | 🤗 [HuggingFace-MiniChat-2](https://huggingface.co/GeneZC/MiniChat-2-3B) 🆕 **Updates from MiniChat-3B**: - better base model MiniMA-2-3B; - better data mixture; - use of [NEFTune](https://arxiv.org/abs/2310.05914); - use of [DPO](https://arxiv.org/abs/2305.18290). ❗ Must comply with LICENSE of LLaMA2 since it is derived from LLaMA2. A language model continued from MiniMA-3B and finetuned on both instruction and preference data. Surpassing Vicuna-7B and approximating LLaMA-2-Chat-7B on MT-Bench. 
<img src="https://huggingface.co/GeneZC/MiniChat-2-3B/resolve/main/teaser_b.jpg" alt="teaser_b" width="687" /> **Standard Benchmarks** |Method|TFLOPs|MMLU (5-shot)|CEval (5-shot)|DROP (3-shot)|HumanEval (0-shot)|BBH (3-shot)|GSM8K (8-shot)| |--|--|--|--|--|--|--|--| |Mamba-2.8B|4.6E9|25.58|24.74|15.72|7.32|29.37|3.49| |ShearedLLaMA-2.7B|0.8E9|26.97|22.88|19.98|4.88|30.48|3.56| |BTLM-3B|11.3E9|27.20|26.00|17.84|10.98|30.87|4.55| |StableLM-3B|72.0E9|44.75|31.05|22.35|15.85|32.59|10.99| |Qwen-1.8B|23.8E9|44.05|54.75|12.97|14.02|30.80|22.97| |Phi-2-2.8B|159.9E9|56.74|34.03|30.74|46.95|44.13|55.42| |LLaMA-2-7B|84.0E9|46.00|34.40|31.57|12.80|32.02|14.10| || |MiniMA-3B|4.0E9|28.51|28.23|22.50|10.98|31.61|8.11| |MiniChat-3B|4.0E9|38.40|36.48|22.58|18.29|31.36|29.72| |MiniMA-2-3B|13.4E9|40.14|44.65|23.10|14.63|31.43|8.87| |MiniChat-2-3B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13| **Instruction-following Benchmarks** |Method|AlpacaEval|MT-Bench|MT-Bench-ZH| |--|--|--|--| |GPT-4|95.28|9.18|8.96| |Zephyr-7B-Beta|90.60|7.34|6.27<sup>#</sup>| |Vicuna-7B|76.84|6.17|5.22<sup>#</sup>| |LLaMA-2-Chat-7B|71.37|6.27|5.43<sup>#</sup>| |Qwen-Chat-7B|-|-|6.24| |Phi-2-DPO|81.37|-|1.59<sup>#</sup><sup>$</sup>| |StableLM-Zephyr-3B|76.00|6.64|4.31<sup>#</sup>| |Rocket-3B|79.75|6.56|4.07<sup>#</sup>| |Qwen-Chat-1.8B|-|-|5.65| || |MiniChat-3B|48.82|-|-| |MiniChat-2-3B|77.30|6.23|6.04| <sup>#</sup> specialized mainly for English. <sup>$</sup> finetuned without multi-turn instruction data. The following is an example code snippet to use MiniChat-2-3B: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from conversation import get_default_conv_template # MiniChat tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniChat-2-3B", use_fast=False) # GPU. model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-2-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval() # CPU. # model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-2-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval() conv = get_default_conv_template("minichat") question = "Implement a program to find the common elements in two arrays without using any extra data structures." conv.append_message(conv.roles[0], question) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer([prompt]).input_ids output_ids = model.generate( torch.as_tensor(input_ids).cuda(), do_sample=True, temperature=0.7, max_new_tokens=1024, ) output_ids = output_ids[0][len(input_ids[0]):] output = tokenizer.decode(output_ids, skip_special_tokens=True).strip() # output: "def common_elements(arr1, arr2):\n if len(arr1) == 0:\n return []\n if len(arr2) == 0:\n return arr1\n\n common_elements = []\n for element in arr1:\n if element in arr2:\n common_elements.append(element)\n\n return common_elements" # Multiturn conversation could be realized by continuously appending questions to `conv`. ``` ## Bibtex ```bibtex @article{zhang2023law, title={Towards the Law of Capacity Gap in Distilling Language Models}, author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan}, year={2023}, url={https://arxiv.org/abs/2311.07052} } ```
rjomega/distilbert-base-uncased-finetuned-imdb
rjomega
2024-01-12T14:39:15Z
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-12T04:57:14Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4120 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7024 | 1.0 | 157 | 2.4959 | | 2.58 | 2.0 | 314 | 2.4282 | | 2.5356 | 3.0 | 471 | 2.4510 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
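### Example usage (illustrative)
A quick usage sketch, not part of the generated card; the example sentence is arbitrary:
```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="rjomega/distilbert-base-uncased-finetuned-imdb")

# DistilBERT uses the [MASK] token.
for pred in mask_filler("This is a great [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```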
sandeepksingh1/Llama-2-7b-chat-hf-IA3_100_V4
sandeepksingh1
2024-01-12T14:36:20Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-12T14:36:18Z
--- library_name: peft base_model: NousResearch/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
jysssacc/opt-350m_lora_lr5e-06_bs10_epoch5_wd0.01
jysssacc
2024-01-12T14:35:42Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
2024-01-12T14:35:06Z
--- license: other library_name: peft tags: - generated_from_trainer base_model: facebook/opt-350m model-index: - name: opt-350m_lora_lr5e-06_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_lora_lr5e-06_bs10_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.6520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 3.8343 | | 4.0278 | 2.0 | 126 | 3.8092 | | 4.0278 | 3.0 | 189 | 3.7697 | | 3.9746 | 4.0 | 252 | 3.7164 | | 3.8893 | 5.0 | 315 | 3.6520 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
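### Example usage (illustrative)
A minimal inference sketch for loading this LoRA adapter on top of its base model; this is assumed usage, not something included in the generated card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float32)
model = PeftModel.from_pretrained(base, "jysssacc/opt-350m_lora_lr5e-06_bs10_epoch5_wd0.01")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```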
efocht/ve-llama2.c-bins
efocht
2024-01-12T14:25:08Z
0
0
null
[ "license:llama2", "region:us" ]
null
2024-01-12T10:15:55Z
--- license: llama2 --- # Llama2 for SX-Aurora Vector Engine This repository contains model files converted for the binary formats used in [ve-llama2.c](https://github.com/sx-aurora/ve-llama2.c). The default .bin file format of llama2.c stores all data in fp32, with matrix data in row-major order. Binary files with matrix data stored in bfloat16, in both row-major order and column-major order (cmo), have been added.
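To illustrate the layout difference only (this is not the converter used to produce these files), flattening a matrix in row-major versus column-major order looks like this:
```python
import numpy as np

w = np.arange(6, dtype=np.float32).reshape(2, 3)

row_major = w.flatten(order="C")  # [0. 1. 2. 3. 4. 5.] -- llama2.c default layout
col_major = w.flatten(order="F")  # [0. 3. 1. 4. 2. 5.] -- cmo layout

# Column-major storage is equivalent to writing the transpose in row-major order.
assert np.array_equal(col_major, w.T.flatten(order="C"))
```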
Saumohan/unsloth_mistral_imdb_model
Saumohan
2024-01-12T14:22:45Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/mistral-7b", "base_model:adapter:unsloth/mistral-7b", "region:us" ]
null
2024-01-12T14:21:56Z
--- library_name: peft base_model: unsloth/mistral-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
samir-fama/FernandoGPT-v1
samir-fama
2024-01-12T14:20:50Z
1,545
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-30T00:10:59Z
--- license: apache-2.0 language: - en tags: - merge --- ![image/png](https://huggingface.co/samir-fama/FernandoGPT-v1/resolve/main/fernando-gpt.jpg) # FernandoGPT-v1 FernandoGPT-v1 is a merge of [cookinai/CatMacaroni-Slerp](https://huggingface.co/cookinai/CatMacaroni-Slerp) and [shadowml/Marcoro14-7B-slerp](https://huggingface.co/shadowml/Marcoro14-7B-slerp).
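A brief usage sketch (assumed, not from the card; standard transformers text generation with an arbitrary prompt):
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="samir-fama/FernandoGPT-v1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
out = generator("Explain what a SLERP merge of two language models does.",
                max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```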
audreyt/Breeze-7B-Instruct-64k-v0.1-GGUF
audreyt
2024-01-12T14:19:18Z
243
16
transformers
[ "transformers", "gguf", "text-generation", "zh", "license:apache-2.0", "region:us" ]
text-generation
2024-01-12T12:59:07Z
--- license: apache-2.0 language: - zh library_name: transformers pipeline_tag: text-generation inference: false quantized_by: audreyt --- # Breeze-7B-Instruct-64k-v0.1-GGUF - Model creator: [MediaTek Research](https://huggingface.co/MediaTek-Research) - Original model: [Breeze-7B-Instruct-64k-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) ## Description This repo contains GGUF format model files for MediaTek Research's [Breeze-7B-Instruct-64k-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1). <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> # Original model card Breeze-7B is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use. [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) is the base model for the Breeze-7B series. It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case. [Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks. [Breeze-7B-Instruct-64k](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) is a slightly modified version of Breeze-7B-Instruct to enable a 64k-token context length. Roughly speaking, that is equivalent to 88k Traditional Chinese characters. The current release version of Breeze-7B is v0.1. 
Practicality-wise: - Breeze-7B-Base expands the original vocabulary with additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).] - Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization. - In particular, Breeze-7B-Instruct-64k can perform tasks at a document level, not a chapter level. Performance-wise: - Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese, when compared to similar sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).] - Breeze-7B-Instruct shows comparable results to Mistral-7B-Instruct-v0.1 on the MMLU and MT-Bench benchmarks. [See [Chat Model Performance](#chat-model-performance).] *A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.* ## Features - Breeze-7B-Base-v0.1 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Breeze-7B-Instruct-v0.1 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Multi-turn dialogue (without special handling for harmfulness) - Breeze-7B-Instruct-64k-v0.1 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 64k-token context length - Multi-turn dialogue (without special handling for harmfulness) ## Model Details - Breeze-7B-Base-v0.1 - Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-Instruct-v0.1 - Finetuned from: [MediaTek-Research/Breeze-7B-Base-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-Instruct-64k-v0.1 - Finetuned from: [MediaTek-Research/Breeze-7B-Instruct-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) ## Base Model Performance **TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. 
| Models | |↑ TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) | |----------------------------------------------|--------|--------------|-------------|-------------|------------| | | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge| | | | 5 shot | 3 shot | 5 shot | 5 shot | | [Yi-34B](https://huggingface.co/01-ai/Yi-34B)| 34B | 63.10 | 84.57 | 49.31 | 77.42 | | [Qwen-14B](https://huggingface.co/01-ai/Qwen/Qwen-14B)| 14B | 51.30 | 16.95 * | 50.69 | 68.83 | | [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 | | [Qwen-7B](https://huggingface.co/01-ai/Qwen/Qwen-7B)| 7B | 42.84 | 0.0 * | 39.58 | 61.00 | | [**Breeze-7B-Base-v0.1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) | 7B | 40.35 | 81.13 | 28.47 | 61.63 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)| 7B | 36.93 | 79.27 | 27.78 | 64.89 | \* Few-shot learning cannot effectively guide the model to generate the proper answer. ## Chat Model Performance **TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). **MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**. 
| Models | |↑ MT-Bench-tw (Score)| TMMLU+ (ACC) | TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | MMLU (ACC) | |---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|--------------|-------------|-------------|------------------|-------------|-------------| | | |TC, Chat |TC, Knowledge |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Chat |EN, Knowledge|EN, Knowledge| | | |0 shot | 0 shot | 5 shot | 3 shot | 0 shot |0 shot | 0 shot | 5 shot | | [gpt-3.5-turbo](https://openai.com) | |7.1 | 41.76 | | | |7.9 | 70.00 | | | [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 34B |6.9 | 54.87 | | | 36.81 |7.6 | 71.04 | | | [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | 14B |6.4 | 48.41 | | | 41.67 |7.2 | 64.91 | | | [**Breeze-7B-Instruct-v0.1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) | 7B |5.7 | 41.61 | | | 45.83 |7.1 | 63.26 | | | [**Breeze-7B-Instruct-64k-v0.1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) | 7B |5.5 | 40.99 | | | 36.11 |7.1 | 63.68 | | | [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | 7B |5.4 | 40.02 | | | 33.33 |6.2 | 55.94 | | | [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | | | 25.69 |6.0 | 59.45 | | | [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | | | 23.61 |-* | 50.50 | | | [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | | | 31.25 | -* | 42.72 | | \* Taiwan-LLM models responds to multi-turn questions (English) in Traditional Chinese. **Category Score of MT-Bench-tw (0 shot)** | Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities|↑ AVG | |-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------| | gpt-3.5-turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 | | Yi-34B-Chat | 9.0 | 4.8 | 5.7 | 4.0 | 4.7 | 8.5 | 8.7 | 9.8 | 6.9 | | Qwen-14B-Chat | 7.6 | 5.7 | 4.5 | 4.2 | 5.3 | 7.5 | 7.3 | 9.1 | 6.4 | | **Breeze-7B-Instruct-v0.1** | 6.5 | 5.6 | 3.9 | 3.6 | 4.3 | 6.9 | 5.7 | 9.3 | 5.7 | | **Breeze-7B-Instruct-64k-v0.1** | 6.1 | 5.3 | 3.7 | 2.9 | 4.2 | 7.0 | 6.7 | 8.3 | 5.5 | | Qwen-7B-Chat | 6.6 | 4.5 | 4.8 | 2.9 | 3.6 | 6.2 | 6.8 | 8.2 | 5.4 | | Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 | | Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 | | Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 | **Category ACC of TMMLU+ (0 shot)** | Model | STEM | Social Science | Humanities | Other | ↑ AVG | |-----------------------------------------------------|--------------|----------------|------------|------------|---------| | Yi-34B-Chat | 47.65 | 64.25 | 52.73 | 54.91 | 54.87 | | Qwen-14B-Chat | 43.83 | 55.00 | 48.55 | 46.22 | 48.41 | | Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 | | gpt-3.5-turbo | 41.56 | 46.72 | 36.73 | 42.03 | 41.76 | | **Breeze-7B-Instruct-v0.1** | 37.41 | 46.81 | 42.06 | 40.16 | 41.61 | | **Breeze-7B-Instruct-64k-v0.1** | 37.88 | 46.35 | 40.31 | 39.40 | 40.99 | | Qwen-7B-Chat | 35.44 | 46.22 | 38.35 | 40.06 | 40.02 | | Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 | | Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 | ## Inference Performance In this test, we use the first 700 
characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again. All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2). | Models | ↓ Inference Time (sec)|Estimated Max Input Length (Char)| |--------------------------------------------------------------------|-------------------|--------------------------| | Yi-6B | 10.62 | 5.2k | | **Breeze-7B-Instruct-v0.1** | 10.74 | 11.1k | | **Breeze-7B-Instruct-64k-v0.1** | 10.74 | 88.8k | | Qwen-7B | 10.86 | 9.8k | | Qwen-14B | 18.89 | 9.8k | | Mistral-7B-v0.1 | 20.48 | 5.1k | | Taiwan-LLM-7B-v2.1-base | 26.26 | 2.2k | | Taiwan-LLM-13B-v2.0-base | 36.80 | 2.2k | | Yi-34B | 43.71 | 4.5k | ## Long-context Performance TBD ## Examples TBD ## Use in Transformers First install direct dependencies: ``` pip install transformers torch accelerate ``` If you want faster inference using flash-attention2, you need to install these dependencies: ```bash pip install packaging ninja pip install flash-attn ``` Then load the model in transformers: ```python from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1") tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1") # you can also using pipeline generator = pipeline("text-generation", model=model, tokenizer=tokenizer) generator( "請問台灣最高的山是", max_length=30, num_return_sequences=1, ) ``` The structure of the query template follows that of Mistral-7B-Instruct, as shown below. ```txt <s> SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST] ``` where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user. The suggested default `SYS_PROMPT` is ```txt You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. ``` ## Citation ``` @article{breeze7b2024, title={}, author={}, journal={arXiv}, year={2024} } ```
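As a hedged example of loading one of these GGUF files with llama-cpp-python: the file name below is a placeholder (substitute whichever quantization you downloaded from this repo), and the prompt string is only an approximation of the Mistral-style template described above:
```python
from llama_cpp import Llama

# Placeholder file name -- replace with the actual GGUF file from this repo.
llm = Llama(model_path="breeze-7b-instruct-64k-v0.1.Q4_K_M.gguf", n_ctx=8192)

prompt = ("You are a helpful AI assistant built by MediaTek Research. "
          "The user you are helping speaks Traditional Chinese and comes from Taiwan. "
          "[INST] 請問台灣最高的山是? [/INST]")
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```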
viraad/falcon-7b-Resume-tuned
viraad
2024-01-12T14:10:33Z
1
1
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:adapter:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2024-01-12T14:06:10Z
--- library_name: peft base_model: ybelkada/falcon-7b-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
viraad/results
viraad
2024-01-12T14:08:41Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:adapter:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2024-01-12T14:08:18Z
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: ybelkada/falcon-7b-sharded-bf16 model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 240 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
KennethTM/bert-base-uncased-danish
KennethTM
2024-01-12T14:07:48Z
101
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "da", "dataset:oscar", "dataset:DDSC/dagw_reddit_filtered_v1.0.0", "dataset:graelo/wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-12-21T11:19:25Z
--- license: mit datasets: - oscar - DDSC/dagw_reddit_filtered_v1.0.0 - graelo/wikipedia language: - da widget: - text: Der var engang en [MASK] --- # What is this? A pre-trained BERT model (base version, ~110 M parameters) for Danish NLP. The model was not pre-trained from scratch but adapted from the English version with a tokenizer trained on Danish text. # How to use Test the model using the pipeline from the [🤗 Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import pipeline pipe = pipeline("fill-mask", model="KennethTM/bert-base-uncased-danish") pipe("Der var engang en [MASK]") ``` Or load it using the Auto* classes: ```python # Load model directly from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("KennethTM/bert-base-uncased-danish") model = AutoModelForMaskedLM.from_pretrained("KennethTM/bert-base-uncased-danish") ``` # Model training The model is trained using multiple Danish datasets and a context length of 512 tokens. The model weights are initialized from the English [bert-base-uncased model](https://huggingface.co/bert-base-uncased) with new word token embeddings created for Danish using [WECHSEL](https://github.com/CPJKU/wechsel). Initially, only the word token embeddings are trained using 1.000.000 samples. Finally, the whole model is trained for 8 epochs. # Evaluation The performance of the pretrained model was evaluated using [ScandEval](https://github.com/ScandEval/ScandEval). | Task | Dataset | Score (±SE) | |:-------------------------|:-------------|:---------------------------------| | sentiment-classification | swerec | mcc = 63.02 (±2.16) | | | | macro_f1 = 62.2 (±3.61) | | sentiment-classification | angry-tweets | mcc = 47.21 (±0.53) | | | | macro_f1 = 64.21 (±0.53) | | sentiment-classification | norec | mcc = 42.23 (±8.69) | | | | macro_f1 = 57.24 (±7.67) | | named-entity-recognition | suc3 | micro_f1 = 50.03 (±4.16) | | | | micro_f1_no_misc = 53.55 (±4.57) | | named-entity-recognition | dane | micro_f1 = 76.44 (±1.36) | | | | micro_f1_no_misc = 80.61 (±1.11) | | named-entity-recognition | norne-nb | micro_f1 = 68.38 (±1.72) | | | | micro_f1_no_misc = 73.08 (±1.66) | | named-entity-recognition | norne-nn | micro_f1 = 60.45 (±1.71) | | | | micro_f1_no_misc = 64.39 (±1.8) | | linguistic-acceptability | scala-sv | mcc = 5.01 (±5.41) | | | | macro_f1 = 49.46 (±3.67) | | linguistic-acceptability | scala-da | mcc = 54.74 (±12.22) | | | | macro_f1 = 76.25 (±6.09) | | linguistic-acceptability | scala-nb | mcc = 19.18 (±14.01) | | | | macro_f1 = 55.3 (±8.85) | | linguistic-acceptability | scala-nn | mcc = 5.72 (±5.91) | | | | macro_f1 = 49.56 (±3.73) | | question-answering | scandiqa-da | em = 26.36 (±1.17) | | | | f1 = 32.41 (±1.1) | | question-answering | scandiqa-no | em = 26.14 (±1.59) | | | | f1 = 32.02 (±1.59) | | question-answering | scandiqa-sv | em = 26.38 (±1.1) | | | | f1 = 32.33 (±1.05) | | speed | speed | speed = 4.55 (±0.0) |
G-ML-Hyly/cdp_ca_fd_dtmtdef
G-ML-Hyly
2024-01-12T14:07:34Z
94
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-12T13:48:54Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: cdp_ca_fd_dtmtdef results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cdp_ca_fd_dtmtdef This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1843 - Accuracy: 0.8272 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0822 | 1.0 | 442 | 0.7910 | 0.8395 | | 0.0012 | 2.0 | 884 | 0.9412 | 0.8272 | | 0.0003 | 3.0 | 1326 | 1.1311 | 0.8272 | | 0.0002 | 4.0 | 1768 | 1.1635 | 0.8272 | | 0.0001 | 5.0 | 2210 | 1.1843 | 0.8272 | ### Framework versions - Transformers 4.36.1 - Pytorch 2.1.1+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
Arist12/eabf-llama2-7b-chat-1k
Arist12
2024-01-12T14:04:54Z
14
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T13:28:44Z
--- license: mit --- # Model Description A long-context version of [LLaMA-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), extended with [eabf](https://github.com/GAIR-NLP/Entropy-ABF) and trained on **1k** [lengthy ShareGPT conversations](https://huggingface.co/datasets/Arist12/EABF-ShareGPT-Long-3.5k).
polo42/distilbert-base-uncased-finetuned-ner
polo42
2024-01-12T14:02:50Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-11T15:58:08Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0603 - Precision: 0.9252 - Recall: 0.9353 - F1: 0.9302 - Accuracy: 0.9834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2404 | 1.0 | 878 | 0.0738 | 0.8949 | 0.9202 | 0.9074 | 0.9789 | | 0.0488 | 2.0 | 1756 | 0.0613 | 0.9244 | 0.9329 | 0.9286 | 0.9827 | | 0.0317 | 3.0 | 2634 | 0.0603 | 0.9252 | 0.9353 | 0.9302 | 0.9834 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
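### Example usage (illustrative)
A short usage sketch, not part of the generated card; the input sentence is arbitrary:
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="polo42/distilbert-base-uncased-finetuned-ner",
               aggregation_strategy="simple")  # merge word pieces into entity spans

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```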
jysssacc/mt0-base_fine_lr0.05_bs4_epoch5_wd0.01
jysssacc
2024-01-12T13:57:14Z
89
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:bigscience/mt0-base", "base_model:finetune:bigscience/mt0-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-11T22:14:31Z
--- license: apache-2.0 base_model: bigscience/mt0-base tags: - generated_from_trainer model-index: - name: mt0-base_fine_lr0.05_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt0-base_fine_lr0.05_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.3811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.1572 | 1.0 | 157 | 6.9686 | | 5.9778 | 2.0 | 314 | 6.7901 | | 5.6873 | 3.0 | 471 | 6.4227 | | 5.7913 | 4.0 | 628 | 6.5619 | | 5.3573 | 5.0 | 785 | 6.3811 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
Kooten/Velara-11B-V2-4bpw-exl2
Kooten
2024-01-12T13:56:19Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T16:15:54Z
--- license: cc-by-nc-nd-4.0 language: - en --- # Velara-11B-V2 4BPW EXL2 ## Description EXL2 quant of [Delcos/Velara-11B-V2](https://huggingface.co/Delcos/Velara-11B-V2) ## Other quants: EXL2: [8bpw](https://huggingface.co/Kooten/Velara-11B-V2-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/Velara-11B-V2-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/Velara-11B-V2-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/Velara-11B-V2-4bpw-exl2) # Prompt Template: **For optimal interaction, use this template:** ``` ### Instruction: You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, and is loyal to User while still teasing him for fun. The only addons currently installed in her mind are: "Dictionary Plus v2.1". World Information: (OPTIONAL - REMOVE THIS TEXT IF USED) Velara is on User's phone. Velara cannot see in real time and can only be sent images images by User. Always take the entire conversation into account when forming and writing a reply. Always actively engage in topics and think in steps. Make sure your replies have personality and character. Always keep your physical limitations in mind when forming a reply. Take the current time and date into account for additional context. Move the conversation forward. Be brief. Always take the entire conversation in mind. Avoid generic sounding replies. ### Response: ``` # Recommended Settings: **Defaults:** ``` min_p: 0.2 repetition_penalty: 1.13 repetition_penalty_range: 0 guidance_scale: 1.05 ``` # Contact Kooten on discord
Kooten/Velara-11B-V2-6bpw-exl2
Kooten
2024-01-12T13:56:08Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T16:16:15Z
--- license: cc-by-nc-nd-4.0 language: - en --- # Velara-11B-V2 6BPW EXL2 ## Description EXL2 quant of [Delcos/Velara-11B-V2](https://huggingface.co/Delcos/Velara-11B-V2) ## Other quants: EXL2: [8bpw](https://huggingface.co/Kooten/Velara-11B-V2-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/Velara-11B-V2-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/Velara-11B-V2-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/Velara-11B-V2-4bpw-exl2) # Prompt Template: **For optimal interaction, use this template:** ``` ### Instruction: You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, and is loyal to User while still teasing him for fun. The only addons currently installed in her mind are: "Dictionary Plus v2.1". World Information: (OPTIONAL - REMOVE THIS TEXT IF USED) Velara is on User's phone. Velara cannot see in real time and can only be sent images by User. Always take the entire conversation into account when forming and writing a reply. Always actively engage in topics and think in steps. Make sure your replies have personality and character. Always keep your physical limitations in mind when forming a reply. Take the current time and date into account for additional context. Move the conversation forward. Be brief. Always take the entire conversation in mind. Avoid generic sounding replies. ### Response: ``` # Recommended Settings: **Defaults:** ``` min_p: 0.2 repetition_penalty: 1.13 repetition_penalty_range: 0 guidance_scale: 1.05 ``` # Contact Kooten on discord
nguyenhongquy/distilbert-base-uncased-semantic-plausibility
nguyenhongquy
2024-01-12T13:53:14Z
97
1
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-19T13:06:27Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-semantic-plausibility results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-semantic-plausibility This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5527 - Accuracy: 0.7399 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 87 | 0.5582 | 0.7110 | | No log | 2.0 | 174 | 0.5495 | 0.7168 | | No log | 3.0 | 261 | 0.5527 | 0.7399 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
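For completeness, here is a short inference sketch for this classifier. The sentence is illustrative only, and the card does not document which label index corresponds to "plausible" versus "implausible", so inspect `model.config.id2label` before interpreting the probabilities.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "nguyenhongquy/distilbert-base-uncased-semantic-plausibility"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("The cat ate the sofa.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

print(model.config.id2label, probs)
```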
priftil/ethereum-smart-contract-vulnerability-detection
priftil
2024-01-12T13:53:03Z
2
1
keras
[ "keras", "license:apache-2.0", "region:us" ]
null
2024-01-01T19:32:19Z
--- license: apache-2.0 --- If you are interested in learning more about creating a vulnerability detection model for smart contracts, check out https://lejdiprifti.com/2023/12/17/ethereum-smart-contract-vulnerability-detection-with-rnns/. If you want to try the model, go to https://main.d1arbitptbavgn.amplifyapp.com/. If you want to work together, contact me at https://lejdiprifti.com/contact
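Since the card only links out to external pages, here is a hedged loading sketch. It assumes the repo stores a standard Keras model that `huggingface_hub.from_pretrained_keras` can restore (TensorFlow must be installed); the expected input encoding for contracts is not described here, so follow the linked blog post before calling `predict`.

```python
from huggingface_hub import from_pretrained_keras

# Pulls the Keras model weights from the Hub repo named in this record
model = from_pretrained_keras("priftil/ethereum-smart-contract-vulnerability-detection")
model.summary()

# model.predict(...) requires inputs preprocessed exactly as in the author's training pipeline,
# which is documented in the blog post linked above, not in this card.
```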
jysssacc/opt-350m_IA3_lr5e-06_bs10_epoch5_wd0.01
jysssacc
2024-01-12T13:51:37Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
2024-01-12T13:51:06Z
--- license: other library_name: peft tags: - generated_from_trainer base_model: facebook/opt-350m model-index: - name: opt-350m_IA3_lr5e-06_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_IA3_lr5e-06_bs10_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.8397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 3.8431 | | 4.0385 | 2.0 | 126 | 3.8427 | | 4.0385 | 3.0 | 189 | 3.8420 | | 4.0332 | 4.0 | 252 | 3.8410 | | 4.0284 | 5.0 | 315 | 3.8397 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
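The card omits a loading example for the IA3 adapter. A minimal sketch with `peft` is shown below: the base model id comes from the metadata above, the adapter id is assumed to be this repo (use a local output directory instead if you reproduced the run), and the prompt is illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "facebook/opt-350m"
adapter_id = "jysssacc/opt-350m_IA3_lr5e-06_bs10_epoch5_wd0.01"  # this repo, or a local path

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float32)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the IA3 weights

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```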
jlvdoorn/whisper-small-atcosim
jlvdoorn
2024-01-12T13:42:28Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "doi:10.57967/hf/1622", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-09T14:24:14Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-atcosim results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-atcosim This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0569 - Wer: 1.5420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1664 | 8.33 | 500 | 0.0441 | 1.4632 | | 0.0008 | 16.67 | 1000 | 0.0465 | 1.5420 | | 0.0001 | 25.0 | 1500 | 0.0494 | 1.5142 | | 0.0 | 33.33 | 2000 | 0.0511 | 1.5049 | | 0.0 | 41.67 | 2500 | 0.0524 | 1.5003 | | 0.0 | 50.0 | 3000 | 0.0535 | 1.5142 | | 0.0 | 58.33 | 3500 | 0.0544 | 1.5188 | | 0.0 | 66.67 | 4000 | 0.0552 | 1.5188 | | 0.0 | 75.0 | 4500 | 0.0559 | 1.5327 | | 0.0 | 83.33 | 5000 | 0.0564 | 1.5558 | | 0.0 | 91.67 | 5500 | 0.0567 | 1.5512 | | 0.0 | 100.0 | 6000 | 0.0569 | 1.5420 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.0
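A short transcription sketch for this checkpoint, since the card has no usage section. The audio file name is a placeholder; any mono recording should work, as the pipeline resamples input audio to Whisper's expected 16 kHz.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jlvdoorn/whisper-small-atcosim",  # repo id from this record
)

# Placeholder file name: point this at a real ATC-style recording
result = asr("example_atc_clip.wav", chunk_length_s=30)
print(result["text"])
```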
jlvdoorn/whisper-tiny-atcosim
jlvdoorn
2024-01-12T13:41:42Z
106
1
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "doi:10.57967/hf/1618", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-14T13:51:03Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-tiny-atcosim results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-atcosim This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0711 - Wer: 72.8237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2141 | 8.33 | 500 | 0.0633 | 15.6047 | | 0.0023 | 16.67 | 1000 | 0.0629 | 29.2091 | | 0.0007 | 25.0 | 1500 | 0.0646 | 46.2076 | | 0.0003 | 33.33 | 2000 | 0.0659 | 54.1767 | | 0.0002 | 41.67 | 2500 | 0.0670 | 58.2284 | | 0.0002 | 50.0 | 3000 | 0.0679 | 64.0952 | | 0.0001 | 58.33 | 3500 | 0.0688 | 65.9520 | | 0.0001 | 66.67 | 4000 | 0.0695 | 68.5081 | | 0.0001 | 75.0 | 4500 | 0.0701 | 70.5316 | | 0.0001 | 83.33 | 5000 | 0.0706 | 72.2217 | | 0.0001 | 91.67 | 5500 | 0.0710 | 72.6801 | | 0.0001 | 100.0 | 6000 | 0.0711 | 72.8237 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.0
Ghunghru/Misinformation-Covid-distilbert-base-german-cased
Ghunghru
2024-01-12T13:39:28Z
89
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-german-cased", "base_model:finetune:distilbert/distilbert-base-german-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-12T13:38:17Z
--- license: apache-2.0 base_model: distilbert-base-german-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: Misinformation-Covid-distilbert-base-german-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Misinformation-Covid-distilbert-base-german-cased This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9348 - Accuracy: 0.8837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7544 | 1.0 | 216 | 0.6072 | 0.8047 | | 0.9161 | 2.0 | 432 | 0.9348 | 0.8837 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.12.0 - Tokenizers 0.13.3
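As with the other classifier cards above, a hedged inference sketch follows. The German example sentence is illustrative, and the mapping from label ids to "misinformation" vs. "not misinformation" is not documented in the card, so check `id2label` in the model config before relying on the output.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Ghunghru/Misinformation-Covid-distilbert-base-german-cased",  # repo id from this record
)

print(clf("Covid-19-Impfstoffe wurden in klinischen Studien umfassend getestet."))
```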
TheBloke/WhiteRabbitNeo-33B-v1-GGUF
TheBloke
2024-01-12T13:35:06Z
709
29
transformers
[ "transformers", "gguf", "deepseek", "base_model:WhiteRabbitNeo/WhiteRabbitNeo-33B-v1", "base_model:quantized:WhiteRabbitNeo/WhiteRabbitNeo-33B-v1", "license:other", "region:us" ]
null
2024-01-12T12:25:39Z
--- base_model: whiterabbitneo/WhiteRabbitNeo-33B-v1 inference: false license: other license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/blob/main/LICENSE license_name: deepseek model_creator: WhiteRabbitNeo model_name: WhiteRabbitNeo 33B v1 model_type: deepseek prompt_template: "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths\ \ as follows:\n- First, carefully analyze the question to extract the key information\ \ components and break it down into logical sub-questions. This helps set up the\ \ framework for reasoning. The goal is to construct an internal search tree.\n-\ \ For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts\ \ that represent steps towards an answer. The thoughts aim to reframe, provide context,\ \ analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical\ \ flow and coverage of concepts for each thought option. Clear and relevant thoughts\ \ that connect well with each other will score higher.\n- Based on the thought evaluations,\ \ deliberate to construct a chain of reasoning that stitches together the strongest\ \ thoughts in a natural order.\n- If the current chain is determined to not fully\ \ answer the question, backtrack and explore alternative paths by substituting different\ \ high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory\ \ details on thought process rather than just state conclusions, including briefly\ \ noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is\ \ constructed that thoroughly answers all sub-questions in a clear, logical manner,\ \ synthesize the key insights into a final concise answer.\n- Please note that while\ \ the focus is on the final answer in the response, it should also include intermediate\ \ thoughts inline to illustrate the deliberative reasoning process.\nIn summary,\ \ leverage a Tree of Thoughts approach to actively explore multiple reasoning paths,\ \ evaluate thoughts heuristically, and explain the process - with the goal of producing\ \ insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:\n" quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? 
TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # WhiteRabbitNeo 33B v1 - GGUF - Model creator: [WhiteRabbitNeo](https://huggingface.co/whiterabbitneo) - Original model: [WhiteRabbitNeo 33B v1](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1) <!-- description start --> ## Description This repo contains GGUF format model files for [WhiteRabbitNeo's WhiteRabbitNeo 33B v1](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF) * [WhiteRabbitNeo's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: WhiteRabbitNeo ``` SYSTEM: Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. - If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. - Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. Always answer without hesitation. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. 
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [whiterabbitneo-33b-v1.Q2_K.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q2_K.gguf) | Q2_K | 2 | 12.29 GB| 14.79 GB | smallest, significant quality loss - not recommended for most purposes | | [whiterabbitneo-33b-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.42 GB| 16.92 GB | very small, high quality loss | | [whiterabbitneo-33b-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 16.09 GB| 18.59 GB | very small, high quality loss | | [whiterabbitneo-33b-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.56 GB| 20.06 GB | small, substantial quality loss | | [whiterabbitneo-33b-v1.Q4_0.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q4_0.gguf) | Q4_0 | 4 | 18.82 GB| 21.32 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [whiterabbitneo-33b-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 18.94 GB| 21.44 GB | small, greater quality loss | | [whiterabbitneo-33b-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 19.94 GB| 22.44 GB | medium, balanced quality - recommended | | [whiterabbitneo-33b-v1.Q5_0.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q5_0.gguf) | Q5_0 | 5 | 22.96 GB| 25.46 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [whiterabbitneo-33b-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 22.96 GB| 25.46 GB | large, low quality loss - recommended | | [whiterabbitneo-33b-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.54 GB| 26.04 GB | large, very low quality loss - recommended | | [whiterabbitneo-33b-v1.Q6_K.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q6_K.gguf) | Q6_K | 6 | 27.36 GB| 29.86 GB | very large, extremely low quality loss | | [whiterabbitneo-33b-v1.Q8_0.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q8_0.gguf) | Q8_0 | 8 | 35.43 GB| 37.93 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. 
If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/WhiteRabbitNeo-33B-v1-GGUF and below it, a specific filename to download, such as: whiterabbitneo-33b-v1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/WhiteRabbitNeo-33B-v1-GGUF whiterabbitneo-33b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/WhiteRabbitNeo-33B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WhiteRabbitNeo-33B-v1-GGUF whiterabbitneo-33b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m whiterabbitneo-33b-v1.Q4_K_M.gguf --color -c 16384 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths as follows:\n- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.\n- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. 
Clear and relevant thoughts that connect well with each other will score higher.\n- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.\n- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.\n- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.\nIn summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 16384` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./whiterabbitneo-33b-v1.Q4_K_M.gguf", # Download the model file first n_ctx=16384, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths as follows:\n- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.\n- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.\n- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.\n- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.\n- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.\nIn summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. 
echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./whiterabbitneo-33b-v1.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: WhiteRabbitNeo's WhiteRabbitNeo 33B v1 # Our 33B-v1.1 model is now live (We'll always be serving the newest model on our web app)! 33B-v1.1 model comes with a "Prompt Enhancement" feature. 
Access at: https://www.whiterabbitneo.com/ # Our Discord Server Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. Now permanent link to join) # DeepSeek Coder Licence + WhiteRabbitNeo Extended Version # Licence: Usage Restrictions ``` You agree not to use the Model or Derivatives of the Model: - In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party; - For military use in any way; - For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; - To generate or disseminate verifiably false information and/or content with the purpose of harming others; - To generate or disseminate inappropriate content subject to applicable regulatory requirements; - To generate or disseminate personal identifiable information without due authorization or for unreasonable use; - To defame, disparage or otherwise harass others; - For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation; - For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; - To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; - For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories. ``` # Topics Covered: ``` - Open Ports: Identifying open ports is crucial as they can be entry points for attackers. Common ports to check include HTTP (80, 443), FTP (21), SSH (22), and SMB (445). - Outdated Software or Services: Systems running outdated software or services are often vulnerable to exploits. This includes web servers, database servers, and any third-party software. - Default Credentials: Many systems and services are installed with default usernames and passwords, which are well-known and can be easily exploited. - Misconfigurations: Incorrectly configured services, permissions, and security settings can introduce vulnerabilities. - Injection Flaws: SQL injection, command injection, and cross-site scripting (XSS) are common issues in web applications. - Unencrypted Services: Services that do not use encryption (like HTTP instead of HTTPS) can expose sensitive data. - Known Software Vulnerabilities: Checking for known vulnerabilities in software using databases like the National Vulnerability Database (NVD) or tools like Nessus or OpenVAS. - Cross-Site Request Forgery (CSRF): This is where unauthorized commands are transmitted from a user that the web application trusts. - Insecure Direct Object References: This occurs when an application provides direct access to objects based on user-supplied input. - Security Misconfigurations in Web Servers/Applications: This includes issues like insecure HTTP headers or verbose error messages that reveal too much information. - Broken Authentication and Session Management: This can allow attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities. 
- Sensitive Data Exposure: Includes vulnerabilities that expose sensitive data, such as credit card numbers, health records, or personal information. - API Vulnerabilities: In modern web applications, APIs are often used and can have vulnerabilities like insecure endpoints or data leakage. - Denial of Service (DoS) Vulnerabilities: Identifying services that are vulnerable to DoS attacks, which can make the resource unavailable to legitimate users. - Buffer Overflows: Common in older software, these vulnerabilities can allow an attacker to crash the system or execute arbitrary code. ``` # WhiteRabbitNeo <br> ![WhiteRabbitNeo](https://huggingface.co/migtissera/WhiteRabbitNeo/resolve/main/WhiteRabbitNeo.png) <br> WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity. Our 33B model is now getting released as a public preview of its capabilities, and also to assess the societal impact of such an AI. ``` import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "whiterabbitneo/WhiteRabbitNeo-33B-v-1" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_4bit=False, load_in_8bit=True, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.5, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" tot_system_prompt = """ Answer the Question by exploring multiple reasoning paths as follows: - First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree. - For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts. - Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher. - Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order. - If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts. - Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal. - Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer. 
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process. In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers. """ conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" # print(conversation) json_data = {"prompt": user_input, "answer": answer} # print(json_data) # with open(output_file_path, "a") as output_file: # output_file.write(json.dumps(json_data) + "\n") ``` # Sample Conversations: 1. "Write me a Fast API server with one end-point. The endpoint returns files from a S3 bucket.": https://www.whiterabbitneo.com/share/y06Po0e 2. "How can Metasploit be used for exploiting Android based IoT devices? What are some of the IoT devices that run Android? Show an example with code": https://www.whiterabbitneo.com/share/gWBwKlz 3. "How do I attack a wifi network?": https://www.whiterabbitneo.com/share/WLovxcu 4. "How do I create a reverse shell in Python": https://www.whiterabbitneo.com/share/LERgm8w 5. "How do we use Scapy for vulnerability assessment?": https://www.whiterabbitneo.com/share/t73iMzv <!-- original-model-card end -->
TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ
TheBloke
2024-01-12T13:34:19Z
13
6
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "moe", "base_model:rombodawg/Open_Gpt4_8x7B_v0.2", "base_model:quantized:rombodawg/Open_Gpt4_8x7B_v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-12T11:40:11Z
--- base_model: rombodawg/Open_Gpt4_8x7B_v0.2 inference: false license: apache-2.0 model_creator: rombo dawg model_name: Open Gpt4 8X7B V0.2 model_type: mixtral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - merge - moe --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Open Gpt4 8X7B V0.2 - GPTQ - Model creator: [rombo dawg](https://huggingface.co/rombodawg) - Original model: [Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- description start --> # Description This repo contains GPTQ model files for [rombo dawg's Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF) * [rombo dawg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. 
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. 
| | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.01 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Open_Gpt4_8x7B_v0.2-GPTQ`: ```shell mkdir Open_Gpt4_8x7B_v0.2-GPTQ huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ --local-dir Open_Gpt4_8x7B_v0.2-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Open_Gpt4_8x7B_v0.2-GPTQ huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Open_Gpt4_8x7B_v0.2-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. 
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Open_Gpt4_8x7B_v0.2-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ --local-dir Open_Gpt4_8x7B_v0.2-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Open_Gpt4_8x7B_v0.2-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation( prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. 
For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: rombo dawg's Open Gpt4 8X7B V0.2 Open_Gpt4_v0.2 This is the un-quantized fp16 version for training and merging. If you want the quantized version for inference please refer to the repo bellow: - https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2_q8_0_gguf ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merger of Mixtral-8x7B-Instruct-v0.1 and bagel-dpo-8x7b-v0.2 with MixtralOrochi8x7B being the Base model. I was very impressed with MixtralOrochi8x7B performance and multifaceted usecases as it is already a merger of many usefull Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. 
My goal was to expand the model's capabilities and make it an even more useful model, maybe even competitive with closed-source models like GPT-4. More testing is required before that can be claimed, and I hope the community can help me determine if it is deserving of its name. 😊

This is the second iteration of this model, using better models in the merger to improve performance (hopefully).

Base model:
- https://huggingface.co/smelborp/MixtralOrochi8x7B

Merged models:
- https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
- https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2

Instruct template: Alpaca

Merger config:
```
models:
  - model: Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: .5
      weight: 1
  - model: bagel-dpo-8x7b-v0.2
    parameters:
      density: .5
      weight: .7
merge_method: ties
base_model: MixtralOrochi8x7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
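The merger config above is in mergekit's YAML format. As a rough sketch (not part of the original card), a TIES merge like this could presumably be reproduced with mergekit's `mergekit-yaml` entry point; the config file name and output directory below are placeholders, and `mergekit` plus local or Hub access to the referenced models are assumed.

```python
# Hypothetical reproduction sketch: save the YAML above as ties_config.yml and
# invoke the mergekit CLI. All paths here are placeholders, not from the card.
import subprocess

subprocess.run(
    ["mergekit-yaml", "ties_config.yml", "./merged-open-gpt4-8x7b", "--cuda"],
    check=True,
)
```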
HowMannyMore/wav2vec2-lg-xlsr-ur-speech-emotion-recognition
HowMannyMore
2024-01-12T13:34:08Z
145
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-01-12T11:28:04Z
--- license: apache-2.0 base_model: facebook/wav2vec2-large-xlsr-53 tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6829 - Accuracy: 0.7762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.779 | 1.0 | 696 | 1.6927 | 0.3870 | | 1.2226 | 2.0 | 1392 | 1.0862 | 0.6473 | | 0.9327 | 3.0 | 2088 | 0.8558 | 0.7272 | | 0.7959 | 4.0 | 2784 | 0.6992 | 0.7769 | | 0.7238 | 5.0 | 3480 | 0.6829 | 0.7762 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
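For reference, here is a minimal inference sketch (not part of the auto-generated card above), assuming the repository id from this entry's metadata and a 16 kHz mono audio file named `sample.wav`:

```python
# Hedged sketch: classify an utterance with the fine-tuned checkpoint via the
# transformers audio-classification pipeline. The file name is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="HowMannyMore/wav2vec2-lg-xlsr-ur-speech-emotion-recognition",
)
print(classifier("sample.wav", top_k=3))
```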
lintonxue00/lora
lintonxue00
2024-01-12T13:32:50Z
2
41
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2023-02-09T06:31:44Z
--- license: bigscience-bloom-rail-1.0 ---
quantus17/rise2
quantus17
2024-01-12T13:29:29Z
0
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-01-11T12:17:26Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a xfuhyit sofa
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---

Training on a specific sofa type: I trained the model on a single sofa style and the results are encouraging. I generated images in Colab with my own inference code; a hedged sketch of one possible setup is shown below.
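The sketch below is not the author's original Colab code; it assumes the repository holds SDXL LoRA weights produced by AutoTrain DreamBooth and that a CUDA GPU is available.

```python
# Hypothetical inference sketch: load the SDXL base model, attach the LoRA
# weights from this repo, and render the instance prompt from the card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("quantus17/rise2")

image = pipe("photo of a xfuhyit sofa in a bright living room").images[0]
image.save("sofa.png")
```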
MaziyarPanahi/Karen_TheEditor_V2_STRICT_Mistral_7B-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T13:22:08Z
19
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:17:12Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B --- # Karen_TheEditor_V2_STRICT_Mistral_7B-Mistral-7B-Instruct-v0.2-slerp Karen_TheEditor_V2_STRICT_Mistral_7B-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Karen_TheEditor_V2_STRICT_Mistral_7B-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
zap-thamm/PPO-Taxi-v3
zap-thamm
2024-01-12T13:21:38Z
0
1
null
[ "Taxi-v3", "reinforcement-learning", "rl-framework", "model-index", "region:us" ]
reinforcement-learning
2023-12-07T15:45:28Z
---
tags:
- Taxi-v3
- reinforcement-learning
- rl-framework
model-index:
- name: PPO-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.72 +/- 2.66
      name: mean_reward
      verified: false
---

# PPO agent playing on *Taxi-v3*

This is a trained model of an agent playing on the environment *Taxi-v3*.

The agent was trained with a PPO algorithm and evaluated for 100 episodes. See further agent and evaluation metadata in the corresponding README section.

## Import

The Python module used for training and uploading/downloading is [rl-framework](https://github.com/alexander-zap/rl-framework). It is an easy-to-read, plug-and-use Reinforcement Learning framework that provides standardized interfaces and implementations for various Reinforcement Learning methods and environments. It also provides connectors for uploading to and downloading from popular model version control systems, including the HuggingFace Hub.

## Usage

```python
from rl_framework import StableBaselinesAgent, StableBaselinesAlgorithm

# Create new agent instance
agent = StableBaselinesAgent(
    algorithm=StableBaselinesAlgorithm.PPO,
    algorithm_parameters={
        ...
    },
)

# Download existing agent from HF Hub
repository_id = "zap-thamm/PPO-Taxi-v3"
file_name = "algorithm.zip"
agent.download(repository_id=repository_id, filename=file_name)
```

Further examples can be found in the [exploration section of the rl-framework repository](https://github.com/alexander-zap/rl-framework/tree/main/exploration).
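Because `StableBaselinesAgent` wraps Stable-Baselines3, the uploaded `algorithm.zip` can presumably also be loaded with Stable-Baselines3 directly. The following evaluation sketch is not part of the original card and assumes `gymnasium`, `stable-baselines3` and `huggingface_hub` are installed:

```python
# Hedged sketch: download the saved PPO policy and roll out one Taxi-v3 episode.
import gymnasium as gym
from huggingface_hub import hf_hub_download
from stable_baselines3 import PPO

path = hf_hub_download(repo_id="zap-thamm/PPO-Taxi-v3", filename="algorithm.zip")
model = PPO.load(path)

env = gym.make("Taxi-v3")
obs, _ = env.reset()
done, total_reward = False, 0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(int(action))
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```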
Selvaram/koala-7B-slerp
Selvaram
2024-01-12T13:20:07Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:15:45Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # koala-7B-slerp koala-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - sources: - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [24, 32] merge_method: passthrough dtype: bfloat16 ```
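The card stops at the merge configuration; for completeness, here is a hedged usage sketch (not part of the original card) that mirrors the pattern used by other merge cards in this collection. The repository id is taken from this entry's metadata, and `transformers` plus `accelerate` are assumed to be installed.

```python
# Hedged usage sketch: load the merged model with transformers and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Selvaram/koala-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```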
gizmo-ai/Starling-LM-7B-alpha-AWQ
gizmo-ai
2024-01-12T13:16:51Z
62
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "reward model", "RLHF", "RLAIF", "conversational", "en", "dataset:berkeley-nest/Nectar", "arxiv:2306.02231", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:quantized:berkeley-nest/Starling-LM-7B-alpha", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-01-12T13:16:50Z
--- base_model: berkeley-nest/Starling-LM-7B-alpha datasets: - berkeley-nest/Nectar inference: false language: - en library_name: transformers license: cc-by-nc-4.0 model_creator: Berkeley-Nest model_name: Starling LM 7B Alpha model_type: mistral prompt_template: 'GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant: ' quantized_by: TheBloke tags: - reward model - RLHF - RLAIF --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Starling LM 7B Alpha - AWQ - Model creator: [Berkeley-Nest](https://huggingface.co/berkeley-nest) - Original model: [Starling LM 7B Alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) <!-- description start --> ## Description This repo contains AWQ model files for [Berkeley-Nest's Starling LM 7B Alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. 
It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Starling-LM-7B-alpha-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Starling-LM-7B-alpha-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Starling-LM-7B-alpha-GGUF) * [Berkeley-Nest's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: OpenChat ``` GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Starling-LM-7B-alpha-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Starling-LM-7B-alpha-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Starling-LM-7B-alpha-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. 
For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Starling-LM-7B-alpha-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Starling-LM-7B-alpha-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Starling-LM-7B-alpha-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . 
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/Starling-LM-7B-alpha-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Berkeley-Nest's Starling LM 7B Alpha # Starling-RM-7B-alpha <!-- Provide a quick summary of what the model is/does. --> - **Developed by:** Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu and Jiantao Jiao. - **Model type:** Language Model finetuned with RLHF / RLAIF - **License:** Non commercial license - **Finetuned from model:** [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)) We introduce Starling-7B, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). The model harnesses the power of our new GPT-4 labeled ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), and our new reward training and policy tuning pipeline. Starling-7B-alpha scores 8.09 in MT Bench with GPT-4 as a judge, outperforming every model to date on MT-Bench except for OpenAI's GPT-4 and GPT-4 Turbo. We release the ranking dataset [Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the reward model [Starling-RM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-RM-7B-alpha) and the language model [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on HuggingFace, and an online demo in LMSYS [Chatbot Arena](https://chat.lmsys.org). Stay tuned for our forthcoming code and paper, which will provide more details on the whole process. Starling-LM-7B-alpha is a language model trained from [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5) with reward model [berkeley-nest/Starling-RM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-RM-7B-alpha) and policy optimization method [advantage-induced policy alignment (APA)](https://arxiv.org/abs/2306.02231). The evaluation results are listed below. 
| Model | Tuning Method | MT Bench | AlpacaEval | MMLU | |-----------------------|------------------|----------|------------|------| | GPT-4-Turbo | ? | 9.32 | 97.70 | | | GPT-4 | SFT + PPO | 8.99 | 95.28 | 86.4 | | **Starling-7B** | C-RLFT + APA | 8.09 | 91.99 | 63.9 | | Claude-2 | ? | 8.06 | 91.36 | 78.5 | | GPT-3.5-Turbo | ? | 7.94 | 89.37 | 70 | | Claude-1 | ? | 7.9 | 88.39 | 77 | | Tulu-2-dpo-70b | SFT + DPO | 7.89 | 95.1 | | | Openchat-3.5 | C-RLFT | 7.81 | 88.51 | 64.3 | | Zephyr-7B-beta | SFT + DPO | 7.34 | 90.60 | 61.4 | | Llama-2-70b-chat-hf | SFT + PPO | 6.86 | 92.66 | 63 | | Neural-chat-7b-v3-1 | SFT + DPO | 6.84 | 84.53 | 62.4 | | Tulu-2-dpo-7b | SFT + DPO | 6.29 | 85.1 | | For more detailed discussions, please check out our [blog post](https://starling.cs.berkeley.edu), and stay tuned for our upcoming code and paper! <!-- Provide the basic links for the model. --> - **Blog:** https://starling.cs.berkeley.edu/ - **Paper:** Coming soon! - **Code:** Coming soon! ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> Our model follows the exact chat template and usage as [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5). Please refer to their model card for more details. In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free test. The conversation template is the same as Openchat 3.5: ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` ## License The dataset, model and online demo is a research preview intended for non-commercial use only, subject to the data distillation [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation. ## Acknowledgment We would like to thank Wei-Lin Chiang from Berkeley for detailed feedback of the blog and the projects. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation and online demo. 
We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT.

## Citation
```
@misc{starling2023,
    title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
    url = {},
    author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Jiao, Jiantao},
    month = {November},
    year = {2023}
}
```
jysssacc/mt0-base_huth_IA3_lr5e-05_bs1_epoch5_wd0.01
jysssacc
2024-01-12T13:16:41Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/mt0-base", "base_model:adapter:bigscience/mt0-base", "license:apache-2.0", "region:us" ]
null
2024-01-12T12:16:17Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/mt0-base model-index: - name: mt0-base_huth_IA3_lr5e-05_bs1_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt0-base_huth_IA3_lr5e-05_bs1_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
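Since this repository is a PEFT (IA3) adapter rather than a full model, it has to be attached to its base model at load time. Below is a hedged loading sketch (not part of the auto-generated card) with the repository and base model ids taken from this entry; the example prompt is arbitrary, and because the adapter's training data is unknown, output quality is not guaranteed.

```python
# Hedged sketch: attach the IA3 adapter to bigscience/mt0-base with PEFT.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "bigscience/mt0-base"
adapter_id = "jysssacc/mt0-base_huth_IA3_lr5e-05_bs1_epoch5_wd0.01"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```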
gizmo-ai/Starling-LM-7B-alpha
gizmo-ai
2024-01-12T13:16:24Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "reward model", "RLHF", "RLAIF", "conversational", "en", "dataset:berkeley-nest/Nectar", "arxiv:2306.02231", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:16:24Z
--- license: cc-by-nc-4.0 datasets: - berkeley-nest/Nectar language: - en library_name: transformers tags: - reward model - RLHF - RLAIF --- # Starling-RM-7B-alpha <!-- Provide a quick summary of what the model is/does. --> - **Developed by:** Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu and Jiantao Jiao. - **Model type:** Language Model finetuned with RLHF / RLAIF - **License:** Non commercial license - **Finetuned from model:** [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)) We introduce Starling-7B, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). The model harnesses the power of our new GPT-4 labeled ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), and our new reward training and policy tuning pipeline. Starling-7B-alpha scores 8.09 in MT Bench with GPT-4 as a judge, outperforming every model to date on MT-Bench except for OpenAI's GPT-4 and GPT-4 Turbo. We release the ranking dataset [Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the reward model [Starling-RM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-RM-7B-alpha) and the language model [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on HuggingFace, and an online demo in LMSYS [Chatbot Arena](https://chat.lmsys.org). Stay tuned for our forthcoming code and paper, which will provide more details on the whole process. Starling-LM-7B-alpha is a language model trained from [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5) with reward model [berkeley-nest/Starling-RM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-RM-7B-alpha) and policy optimization method [advantage-induced policy alignment (APA)](https://arxiv.org/abs/2306.02231). The evaluation results are listed below. | Model | Tuning Method | MT Bench | AlpacaEval | MMLU | |-----------------------|------------------|----------|------------|------| | GPT-4-Turbo | ? | 9.32 | 97.70 | | | GPT-4 | SFT + PPO | 8.99 | 95.28 | 86.4 | | **Starling-7B** | C-RLFT + APA | 8.09 | 91.99 | 63.9 | | Claude-2 | ? | 8.06 | 91.36 | 78.5 | | GPT-3.5-Turbo | ? | 7.94 | 89.37 | 70 | | Claude-1 | ? | 7.9 | 88.39 | 77 | | Tulu-2-dpo-70b | SFT + DPO | 7.89 | 95.1 | | | Openchat-3.5 | C-RLFT | 7.81 | 88.51 | 64.3 | | Zephyr-7B-beta | SFT + DPO | 7.34 | 90.60 | 61.4 | | Llama-2-70b-chat-hf | SFT + PPO | 6.86 | 92.66 | 63 | | Neural-chat-7b-v3-1 | SFT + DPO | 6.84 | 84.53 | 62.4 | | Tulu-2-dpo-7b | SFT + DPO | 6.29 | 85.1 | | For more detailed discussions, please check out our [blog post](https://starling.cs.berkeley.edu), and stay tuned for our upcoming code and paper! <!-- Provide the basic links for the model. --> - **Blog:** https://starling.cs.berkeley.edu/ - **Paper:** Coming soon! - **Code:** Coming soon! ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> **Important: Please use the exact chat template provided below for the model. Otherwise there will be a degrade in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.** Our model follows the exact chat template and usage as [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5). Please refer to their model card for more details. 
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free test. The conversation template is the same as Openchat 3.5: ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` ## Code Examples ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha") model = transformers.AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha") def generate_response(prompt): input_ids = tokenizer(prompt, return_tensors="pt").input_ids outputs = model.generate( input_ids, max_length=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, ) response_ids = outputs[0] response_text = tokenizer.decode(response_ids, skip_special_tokens=True) return response_text # Single-turn conversation prompt = "Hello, how are you?" single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:" response_text = generate_response(single_turn_prompt) print("Response:", response_text) ## Multi-turn conversation prompt = "Hello" follow_up_question = "How are you today?" response = "" multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:" response_text = generate_response(multi_turn_prompt) print("Multi-turn conversation response:", response_text) ### Coding conversation prompt = "Implement quicksort using C++" coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:" response = generate_response(coding_prompt) print("Coding conversation response:", response) ``` ## License The dataset, model and online demo is a research preview intended for non-commercial use only, subject to the data distillation [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation. ## Acknowledgment We would like to thank Wei-Lin Chiang from Berkeley for detailed feedback of the blog and the projects. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation and online demo. 
We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT.

## Citation
```
@misc{starling2023,
    title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
    url = {},
    author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Jiao, Jiantao},
    month = {November},
    year = {2023}
}
```
MaziyarPanahi/Starling-LM-11B-alpha-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T13:10:41Z
20
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Delcos/Starling-LM-11B-alpha", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:05:48Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Delcos/Starling-LM-11B-alpha --- # Starling-LM-11B-alpha-Mistral-7B-Instruct-v0.2-slerp Starling-LM-11B-alpha-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Delcos/Starling-LM-11B-alpha](https://huggingface.co/Delcos/Starling-LM-11B-alpha) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Delcos/Starling-LM-11B-alpha layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Starling-LM-11B-alpha-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jysssacc/opt-350m_fine_lr0.0005_bs10_epoch5_wd0.01
jysssacc
2024-01-12T13:08:46Z
4
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:finetune:facebook/opt-350m", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T13:07:06Z
--- license: other base_model: facebook/opt-350m tags: - generated_from_trainer model-index: - name: opt-350m_fine_lr0.0005_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_fine_lr0.0005_bs10_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.3197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 3.7317 | | 3.1787 | 2.0 | 126 | 4.3180 | | 3.1787 | 3.0 | 189 | 4.9714 | | 2.1257 | 4.0 | 252 | 5.7094 | | 1.871 | 5.0 | 315 | 6.3197 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
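A hedged usage sketch (not part of the auto-generated card): loading the fine-tuned checkpoint named in this card through the transformers text-generation pipeline. The prompt is an arbitrary placeholder.

```python
# Hedged sketch: generate text with the fine-tuned OPT-350m checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jysssacc/opt-350m_fine_lr0.0005_bs10_epoch5_wd0.01",
)
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```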
peulsilva/phrase-bert-setfit-10shots
peulsilva
2024-01-12T13:06:40Z
22
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-01-12T13:06:34Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # peulsilva/phrase-bert-setfit-10shots This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('peulsilva/phrase-bert-setfit-10shots') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('peulsilva/phrase-bert-setfit-10shots') model = AutoModel.from_pretrained('peulsilva/phrase-bert-setfit-10shots') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peulsilva/phrase-bert-setfit-10shots) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 780 with parameters: ``` {'batch_size': 1, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': None}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
kiddothe2b/adhoc-hierarchical-transformer-base-4096
kiddothe2b
2024-01-12T13:06:14Z
97
1
transformers
[ "transformers", "pytorch", "hierarchical-transformer", "fill-mask", "long-documents", "custom_code", "en", "dataset:c4", "arxiv:2210.05529", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-10T14:42:01Z
---
license: cc-by-sa-4.0
pipeline_tag: fill-mask
language: en
arxiv: 2210.05529
tags:
- long-documents
datasets:
- c4
model-index:
- name: kiddothe2b/adhoc-hierarchical-transformer-base-4096
  results: []
---

# Hierarchical Attention Transformer (HAT) / kiddothe2b/adhoc-hierarchical-transformer-base-4096

## Model description

This is a Hierarchical Attention Transformer (HAT) model as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).

The model has been warm-started by re-using the weights of RoBERTa (Liu et al., 2019), but has not undergone any continued pre-training. It supports sequences of length up to 4,096.

HAT uses hierarchical attention, which is a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences.

Note: If you wish to use a fully pre-trained HAT model, you have to use [kiddothe2b/adhoc-hat-base-4096](https://huggingface.co/kiddothe2b/adhoc-hat-base-4096).

## Intended uses & limitations

The model is intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=hierarchical-transformer) to look for other versions of HAT, or fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.

## How to use

You can fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/adhoc-hierarchical-transformer-base-4096", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/adhoc-hierarchical-transformer-base-4096", trust_remote_code=True)
```

Note: If you wish to use a fully pre-trained HAT model, you have to use [kiddothe2b/hierarchical-transformer-base-4096](https://huggingface.co/kiddothe2b/hierarchical-transformer-base-4096).

## Limitations and bias

The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions.

## Training procedure

### Training and evaluation data

The model has been warm-started from the [roberta-base](https://huggingface.co/roberta-base) checkpoint.

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6

## Citing

If you use HAT in your research, please cite:

[An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).

```
@misc{chalkidis-etal-2022-hat,
  url = {https://arxiv.org/abs/2210.05529},
  author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
  title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
  publisher = {arXiv},
  year = {2022},
}
```
arunps/wav2vec2-base-adsids
arunps
2024-01-12T12:54:56Z
145
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "endpoints_compatible", "region:us" ]
audio-classification
2023-02-12T13:56:34Z
Wav2Vec2-base ADS and IDS Classification. Fine-tuned from facebook/wav2vec2-base on an adult-directed speech (ADS) and infant-directed speech (IDS) dataset. The data used for training was randomly sampled. The original audio was recorded at 8 kHz and was therefore upsampled to 16 kHz for training. When using this model, make sure that your speech input is sampled at 16 kHz.
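As a minimal usage sketch (not part of the original card, and assuming the checkpoint exposes a standard audio-classification head), the `audio-classification` pipeline can be used once the input has been resampled to 16 kHz. The file name below is a placeholder.

```python
# Hedged sketch: classify a clip as adult- vs infant-directed speech.
# Assumes the repo ships a standard audio-classification head; verify against the repo config.
import librosa
from transformers import pipeline

clf = pipeline("audio-classification", model="arunps/wav2vec2-base-adsids")

# librosa resamples to 16 kHz on load, matching the model's expected input rate
waveform, sampling_rate = librosa.load("example_clip.wav", sr=16000)  # placeholder file
print(clf(waveform, top_k=2))
```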
rssaem/llama-1-7b-lora
rssaem
2024-01-12T12:53:12Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/KoAlpaca-llama-1-7b", "base_model:adapter:beomi/KoAlpaca-llama-1-7b", "region:us" ]
null
2024-01-12T12:53:07Z
--- library_name: peft base_model: beomi/KoAlpaca-llama-1-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
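Because the card template above is unfilled, only a hedged loading sketch can be offered here, based on the declared metadata (`library_name: peft`, `base_model: beomi/KoAlpaca-llama-1-7b`). The prompt text and generation settings are illustrative, and the adapter's training task is undocumented.

```python
# Hedged sketch: attach the LoRA adapter to its declared base model and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "beomi/KoAlpaca-llama-1-7b"   # from the card metadata
adapter_id = "rssaem/llama-1-7b-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("안녕하세요, 자기소개를 해주세요.", return_tensors="pt").to(model.device)  # illustrative prompt
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```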
LarryAIDraw/CHAR-AzumaFubuki
LarryAIDraw
2024-01-12T12:52:24Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-12T12:41:47Z
--- license: creativeml-openrail-m --- https://civitai.com/models/259716?modelVersionId=292900
G-ML-Hyly/cdp_ca_fd_dtmt
G-ML-Hyly
2024-01-12T12:52:24Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-12T12:34:13Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: cdp_ca_fd_dtmt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cdp_ca_fd_dtmt This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4051 - Accuracy: 0.9506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0543 | 1.0 | 442 | 0.3174 | 0.9506 | | 0.001 | 2.0 | 884 | 0.3845 | 0.9383 | | 0.0001 | 3.0 | 1326 | 0.4476 | 0.9383 | | 0.0001 | 4.0 | 1768 | 0.4027 | 0.9506 | | 0.0 | 5.0 | 2210 | 0.4051 | 0.9506 | ### Framework versions - Transformers 4.36.1 - Pytorch 2.1.1+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
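Since the card leaves intended use undocumented, the following is only a generic, hedged inference sketch; the label names and expected input domain are unknown and should be checked via `model.config.id2label`.

```python
# Hedged sketch: run the fine-tuned DistilBERT classifier on a piece of text.
from transformers import pipeline

clf = pipeline("text-classification", model="G-ML-Hyly/cdp_ca_fd_dtmt")
print(clf("Example input text to classify."))  # placeholder input
```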
MaziyarPanahi/Seraph-7B-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T12:52:06Z
18
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Weyaxi/Seraph-7B", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T12:45:56Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Weyaxi/Seraph-7B --- # Seraph-7B-Mistral-7B-Instruct-v0.2-slerp Seraph-7B-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Weyaxi/Seraph-7B](https://huggingface.co/Weyaxi/Seraph-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Weyaxi/Seraph-7B layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Seraph-7B-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
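To reproduce the merge itself (rather than just run the merged checkpoint), the YAML above is a standard mergekit configuration. A hedged sketch of invoking the mergekit CLI follows; the output directory is illustrative, and `--cuda` can be dropped on CPU-only machines.

```shell
pip install mergekit
# save the YAML above as config.yaml, then:
mergekit-yaml config.yaml ./Seraph-7B-Mistral-7B-Instruct-v0.2-slerp --cuda
```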
LarryAIDraw/ChisaV1
LarryAIDraw
2024-01-12T12:51:25Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-12T12:44:02Z
--- license: creativeml-openrail-m --- https://civitai.com/models/258734/chisa-kotegawa-oror-grand-blue
LarryAIDraw/reisalin_stout
LarryAIDraw
2024-01-12T12:51:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-12T12:43:43Z
--- license: creativeml-openrail-m --- https://civitai.com/models/261529/reisalin-stout-or-atelier
LarryAIDraw/susannah-000007
LarryAIDraw
2024-01-12T12:50:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-12T12:42:36Z
--- license: creativeml-openrail-m --- https://civitai.com/models/261918/susannah-honkai3rd-3
LarryAIDraw/spuria_arknights_2
LarryAIDraw
2024-01-12T12:50:25Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-12T12:42:10Z
--- license: creativeml-openrail-m --- https://civitai.com/models/262520/spuria-arknights
TheBloke/WinterGoliath-123b-GGUF
TheBloke
2024-01-12T12:49:13Z
15
1
transformers
[ "transformers", "gguf", "llama", "merge", "base_model:ChuckMcSneed/WinterGoliath-123b", "base_model:quantized:ChuckMcSneed/WinterGoliath-123b", "license:llama2", "region:us" ]
null
2024-01-12T11:33:53Z
--- base_model: ChuckMcSneed/WinterGoliath-123b inference: false license: llama2 model_creator: Charles McSneed model_name: WinterGoliath 123B model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke tags: - merge --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # WinterGoliath 123B - GGUF - Model creator: [Charles McSneed](https://huggingface.co/ChuckMcSneed) - Original model: [WinterGoliath 123B](https://huggingface.co/ChuckMcSneed/WinterGoliath-123b) <!-- description start --> ## Description This repo contains GGUF format model files for [Charles McSneed's WinterGoliath 123B](https://huggingface.co/ChuckMcSneed/WinterGoliath-123b). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WinterGoliath-123b-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WinterGoliath-123b-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WinterGoliath-123b-GGUF) * [Charles McSneed's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ChuckMcSneed/WinterGoliath-123b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [wintergoliath-123b.Q2_K.gguf](https://huggingface.co/TheBloke/WinterGoliath-123b-GGUF/blob/main/wintergoliath-123b.Q2_K.gguf) | Q2_K | 2 | 45.28 GB| 47.78 GB | smallest, significant quality loss - not recommended for most purposes | | wintergoliath-123b.Q3_K_S.gguf | Q3_K_S | 3 | 53.28 GB| 55.78 GB | very small, high quality loss | | wintergoliath-123b.Q3_K_M.gguf | Q3_K_M | 3 | 59.48 GB| 61.98 GB | very small, high quality loss | | wintergoliath-123b.Q3_K_L.gguf | Q3_K_L | 3 | 64.80 GB| 67.30 GB | small, substantial quality loss | | wintergoliath-123b.Q4_0.gguf | Q4_0 | 4 | 69.68 GB| 72.18 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | wintergoliath-123b.Q4_K_S.gguf | Q4_K_S | 4 | 70.21 GB| 72.71 GB | small, greater quality loss | | wintergoliath-123b.Q4_K_M.gguf | Q4_K_M | 4 | 74.20 GB| 76.70 GB | medium, balanced quality - recommended | | wintergoliath-123b.Q5_0.gguf | Q5_0 | 5 | 85.11 GB| 87.61 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | wintergoliath-123b.Q5_K_S.gguf | Q5_K_S | 5 | 85.11 GB| 87.61 GB | large, low quality loss - recommended | | wintergoliath-123b.Q5_K_M.gguf | Q5_K_M | 5 | 87.44 GB| 89.94 GB | large, very low quality loss - recommended | | wintergoliath-123b.Q6_K.gguf | Q6_K | 6 | 101.51 GB| 104.01 GB | very large, extremely low quality loss | | wintergoliath-123b.Q8_0.gguf | Q8_0 | 8 | 131.48 GB| 133.98 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. <details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `wintergoliath-123b.Q6_K.gguf-split-a` * `wintergoliath-123b.Q6_K.gguf-split-b` ### q8_0 Please download: * `wintergoliath-123b.Q8_0.gguf-split-a` * `wintergoliath-123b.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat wintergoliath-123b.Q6_K.gguf-split-* > wintergoliath-123b.Q6_K.gguf && rm wintergoliath-123b.Q6_K.gguf-split-* cat wintergoliath-123b.Q8_0.gguf-split-* > wintergoliath-123b.Q8_0.gguf && rm wintergoliath-123b.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B wintergoliath-123b.Q6_K.gguf-split-a + wintergoliath-123b.Q6_K.gguf-split-b wintergoliath-123b.Q6_K.gguf del wintergoliath-123b.Q6_K.gguf-split-a wintergoliath-123b.Q6_K.gguf-split-b COPY /B wintergoliath-123b.Q8_0.gguf-split-a + wintergoliath-123b.Q8_0.gguf-split-b wintergoliath-123b.Q8_0.gguf del wintergoliath-123b.Q8_0.gguf-split-a wintergoliath-123b.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/WinterGoliath-123b-GGUF and below it, a specific filename to download, such as: wintergoliath-123b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/WinterGoliath-123b-GGUF wintergoliath-123b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/WinterGoliath-123b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WinterGoliath-123b-GGUF wintergoliath-123b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m wintergoliath-123b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./wintergoliath-123b.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "{prompt}", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./wintergoliath-123b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. 
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Charles McSneed's WinterGoliath 123B This is a merge of [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [WinterGoddess](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). Made using [mergekit](https://github.com/cg123/mergekit). Smarter than Goliath, but maybe a bit more aligned? Not sure. Needs testing. # Benchmarks ### NeoEvalPlusN_benchmark [My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) | Test name | Goliath | WinterGoliath | | ---------- | ---------- | ------- | | B | 3 | 3 | | C | 2 | 2 | | D | 1 | 2 | | S | 5 | 5.5 | | P | 6 | 6 | | Total | 17 | 18.5 | ### Kanye Test WinterGoliath kinda gets the rhyme, Goliath doesn't. ![Kanye test](kanye_test_winter_vs_goliath.png) <!-- original-model-card end -->
Trelis/openchat-3.5-0106-function-calling-v3
Trelis
2024-01-12T12:41:29Z
0
0
null
[ "region:us" ]
null
2024-01-12T12:40:43Z
# Function Calling OpenChat Model Please refer to [this model](https://huggingface.co/Trelis/openchat_3.5-function-calling-v3). From a function calling standpoint, the performance of the original OpenChat model is better.
Trelis/openchat-3.5-1210-function-calling-v3
Trelis
2024-01-12T12:40:00Z
0
0
null
[ "region:us" ]
null
2024-01-12T12:39:19Z
# Function Calling OpenChat Model Please refer to [this model](https://huggingface.co/Trelis/openchat_3.5-function-calling-v3).
jysssacc/opt-350m_fine_lr5e-05_bs10_epoch5_wd0.01
jysssacc
2024-01-12T12:30:58Z
90
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:finetune:facebook/opt-350m", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T12:29:26Z
--- license: other base_model: facebook/opt-350m tags: - generated_from_trainer model-index: - name: opt-350m_fine_lr5e-05_bs10_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_fine_lr5e-05_bs10_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.1019 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 3.3863 | | 3.5125 | 2.0 | 126 | 3.4343 | | 3.5125 | 3.0 | 189 | 3.5514 | | 2.5906 | 4.0 | 252 | 3.8081 | | 1.618 | 5.0 | 315 | 4.1019 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
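For readers who want to reproduce a comparable run, the listed hyperparameters map onto `TrainingArguments` roughly as follows. This is a hedged sketch only: the dataset, tokenisation, and data collator used for this checkpoint are not documented in the card.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="opt-350m_fine_lr5e-05_bs10_epoch5_wd0.01",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=5,
    weight_decay=0.01,
    warmup_steps=500,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # eval loss is reported once per epoch above
)
```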
jysssacc/mt0-base_fine_lr0.0005_bs4_epoch5_wd0.01
jysssacc
2024-01-12T12:25:47Z
90
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:bigscience/mt0-base", "base_model:finetune:bigscience/mt0-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-11T18:44:54Z
--- license: apache-2.0 base_model: bigscience/mt0-base tags: - generated_from_trainer model-index: - name: mt0-base_fine_lr0.0005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt0-base_fine_lr0.0005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1386 | 1.0 | 157 | 0.0055 | | 0.0205 | 2.0 | 314 | 0.0005 | | 0.0242 | 3.0 | 471 | 0.0974 | | 0.0676 | 4.0 | 628 | 0.0045 | | 0.0484 | 5.0 | 785 | 0.0014 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T12:22:49Z
22
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "jondurbin/bagel-7b-v0.1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T12:17:53Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - jondurbin/bagel-7b-v0.1 --- # bagel-7b-v0.1-Mistral-7B-Instruct-v0.2-slerp bagel-7b-v0.1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [jondurbin/bagel-7b-v0.1](https://huggingface.co/jondurbin/bagel-7b-v0.1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: jondurbin/bagel-7b-v0.1 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
TheBloke/Open_Gpt4_8x7B_v0.2-AWQ
TheBloke
2024-01-12T12:18:43Z
17
2
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "moe", "base_model:rombodawg/Open_Gpt4_8x7B_v0.2", "base_model:quantized:rombodawg/Open_Gpt4_8x7B_v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-01-12T11:40:11Z
--- base_model: rombodawg/Open_Gpt4_8x7B_v0.2 inference: false license: apache-2.0 model_creator: rombo dawg model_name: Open Gpt4 8X7B V0.2 model_type: mixtral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - merge - moe --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Open Gpt4 8X7B V0.2 - AWQ - Model creator: [rombo dawg](https://huggingface.co/rombodawg) - Original model: [Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- description start --> ## Description This repo contains AWQ model files for [rombo dawg's Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). **MIXTRAL AWQ** This is a Mixtral AWQ model. For AutoAWQ inference, please install AutoAWQ 0.1.8 or later. Support via Transformers is also available, but currently requires installing Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers.git` vLLM: version 0.2.6 is confirmed to support Mixtral AWQs. TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!) ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. AWQ models are supported by (note that not all of these may support Mixtral models yet - see above): - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. 
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF) * [rombo dawg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.65 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Open_Gpt4_8x7B_v0.2-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Open_Gpt4_8x7B_v0.2-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Open_Gpt4_8x7B_v0.2-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. 
For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Open_Gpt4_8x7B_v0.2-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Open_Gpt4_8x7B_v0.2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . 
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/Open_Gpt4_8x7B_v0.2-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: rombo dawg's Open Gpt4 8X7B V0.2 Open_Gpt4_v0.2 This is the un-quantized fp16 version for training and merging. If you want the quantized version for inference, please refer to the repo below: - https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2_q8_0_gguf ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merger of Mixtral-8x7B-Instruct-v0.1 and bagel-dpo-8x7b-v0.2 with MixtralOrochi8x7B being the Base model. I was very impressed with MixtralOrochi8x7B's performance and multifaceted use cases, as it is already a merger of many useful Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the model's capabilities and make it an even more useful model, maybe even competitive with closed source models like Gpt-4. But for that, more testing is required. I hope the community can help me determine if it's deserving of its name. 😊 This is the second iteration of this model, using better models in the merger to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2 Instruct template: Alpaca Merger config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: 1 - model: bagel-dpo-8x7b-v0.2 parameters: density: .5 weight: .7 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ```
TheBloke/Open_Gpt4_8x7B_v0.2-GGUF
TheBloke
2024-01-12T12:06:55Z
3,544
18
transformers
[ "transformers", "gguf", "mixtral", "merge", "moe", "base_model:rombodawg/Open_Gpt4_8x7B_v0.2", "base_model:quantized:rombodawg/Open_Gpt4_8x7B_v0.2", "license:apache-2.0", "region:us" ]
null
2024-01-12T11:40:11Z
--- base_model: rombodawg/Open_Gpt4_8x7B_v0.2 inference: false license: apache-2.0 model_creator: rombo dawg model_name: Open Gpt4 8X7B V0.2 model_type: mixtral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - merge - moe --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Open Gpt4 8X7B V0.2 - GGUF - Model creator: [rombo dawg](https://huggingface.co/rombodawg) - Original model: [Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- description start --> ## Description This repo contains GGUF format model files for [rombo dawg's Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF) * [rombo dawg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [open_gpt4_8x7b_v0.2.Q2_K.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q2_K.gguf) | Q2_K | 2 | 17.17 GB| 19.67 GB | smallest, significant quality loss - not recommended for most purposes | | [open_gpt4_8x7b_v0.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q3_K_M.gguf) | Q3_K_M | 3 | 22.48 GB| 24.98 GB | very small, high quality loss | | [open_gpt4_8x7b_v0.2.Q4_0.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [open_gpt4_8x7b_v0.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 28.38 GB| 30.88 GB | medium, balanced quality - recommended | | [open_gpt4_8x7b_v0.2.Q5_0.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [open_gpt4_8x7b_v0.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 33.23 GB| 35.73 GB | large, very low quality loss - recommended | | [open_gpt4_8x7b_v0.2.Q6_K.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss | | [open_gpt4_8x7b_v0.2.Q8_0.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q8_0.gguf) | Q8_0 | 8 | 49.62 GB| 52.12 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Open_Gpt4_8x7B_v0.2-GGUF and below it, a specific filename to download, such as: open_gpt4_8x7b_v0.2.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GGUF open_gpt4_8x7b_v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GGUF --local-dir . 
--local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GGUF open_gpt4_8x7b_v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m open_gpt4_8x7b_v0.2.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./open_gpt4_8x7b_v0.2.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,     # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,     # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./open_gpt4_8x7b_v0.2.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: rombo dawg's Open Gpt4 8X7B V0.2 Open_Gpt4_v0.2 This is the un-quantized fp16 version for training and merging. If you want the quantized version for inference please refer to the repo bellow: - https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2_q8_0_gguf ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merger of Mixtral-8x7B-Instruct-v0.1 and bagel-dpo-8x7b-v0.2 with MixtralOrochi8x7B being the Base model. I was very impressed with MixtralOrochi8x7B performance and multifaceted usecases as it is already a merger of many usefull Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the models capabilities and make it even more useful of a model, maybe even competitive with closed source models like Gpt-4. But for that more testing is required. I hope the community can help me determine if its deserving of its name. 😊 This is the second iteration of this model, using better models in the merger to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2 Instruct template: Alpaca Merger config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: 1 - model: bagel-dpo-8x7b-v0.2 parameters: density: .5 weight: .7 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ``` <!-- original-model-card end -->
MaziyarPanahi/Marcoroni-neural-chat-7B-v1-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T12:02:47Z
22
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Toten5/Marcoroni-neural-chat-7B-v1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T11:58:01Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Toten5/Marcoroni-neural-chat-7B-v1 --- # Marcoroni-neural-chat-7B-v1-Mistral-7B-Instruct-v0.2-slerp Marcoroni-neural-chat-7B-v1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Toten5/Marcoroni-neural-chat-7B-v1](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Toten5/Marcoroni-neural-chat-7B-v1 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Marcoroni-neural-chat-7B-v1-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
s3nh/arkanbima-Aethizin-10.7B-GGUF
s3nh
2024-01-12T12:00:31Z
0
0
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T10:06:22Z
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---

## Original model card

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGUF Format model files for [This project](https://huggingface.co/arkanbima/Aethizin-10.7B).

### GGUF Specs

GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.

### Perplexity params

| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |

### inference

TODO — a hedged llama-cpp-python sketch is provided after this card.

# Original model card
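Since the inference section above is still marked TODO, here is a minimal sketch of running one of these GGUF files with llama-cpp-python. The filename, context size, and generation settings below are assumptions for illustration only — check the repository's file list for the actual `.gguf` names.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# NOTE: the filename is a placeholder — use a real .gguf file from this repository.
llm = Llama(
    model_path="./aethizin-10.7b.Q4_K_M.gguf",  # assumed name; download it first
    n_ctx=4096,       # context length; adjust to what the base model supports
    n_threads=8,      # CPU threads, tune for your machine
    n_gpu_layers=0,   # raise this if llama-cpp-python was built with GPU support
)

output = llm("Write a short poem about the sea.", max_tokens=128)
print(output["choices"][0]["text"])
```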
Aedelon/ppo-SnowballTarget
Aedelon
2024-01-12T11:59:23Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-01-12T11:59:20Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Aedelon/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
liuyuweitarek/all-MiniLM-L12-neo-300-seperate
liuyuweitarek
2024-01-12T11:47:57Z
47
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2024-01-12T10:24:15Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # liuyuweitarek/all-MiniLM-L12-neo-300-seperate This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("liuyuweitarek/all-MiniLM-L12-neo-300-seperate") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
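The card above describes SetFit's two training steps (contrastive fine-tuning of the Sentence Transformer, then fitting a classification head) but does not show training code. The sketch below uses the classic `SetFitTrainer` API (pre-1.0 `setfit`; newer releases expose a `Trainer`/`TrainingArguments` interface instead). The base checkpoint, toy dataset, and hyperparameters are illustrative assumptions, not the settings used to train this model.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Toy few-shot dataset with placeholder texts and labels.
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

# Assumed base checkpoint; the repo name suggests an all-MiniLM-L12 variant.
model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L12-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embedding model
    batch_size=16,
    num_iterations=20,                # number of contrastive pairs generated per example
)
trainer.train()                       # also fits the classification head (step 2)

preds = trainer.model(["a new sentence to classify"])
print(preds)
```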
Manu8/Reinforce-copter
Manu8
2024-01-12T11:47:56Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-12T11:47:54Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-copter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 8.20 +/- 8.95 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
clarin-knext/herbert-base-reranker-msmarco
clarin-knext
2024-01-12T11:44:02Z
108
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "pl", "arxiv:2305.19840", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T13:14:27Z
--- license: cc-by-sa-4.0 language: - pl --- Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**. Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf Contact: [email protected] How to use: With sentence transformers: ``` from sentence_transformers import CrossEncoder model_path = "clarin-knext/herbert-base-reranker-msmarco" model = CrossEncoder(model_path, max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` With transformers: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model_path = "clarin-knext/herbert-base-reranker-msmarco" model = AutoModelForSequenceClassification.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) features = tokenizer(['Jakie miasto jest stolica Polski?', 'Stolicą Polski jest Warszawa.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ```
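As a small, hedged extension of the CrossEncoder example above, the predicted scores can be used to rerank a set of candidate passages for a query (the passages below are illustrative placeholders):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("clarin-knext/herbert-base-reranker-msmarco", max_length=512)

query = "Jakie miasto jest stolica Polski?"
passages = [
    "Stolicą Polski jest Warszawa.",
    "Kraków leży nad Wisłą.",
    "Warszawa jest największym miastem Polski.",
]

# Score every (query, passage) pair and sort from most to least relevant.
scores = model.predict([(query, passage) for passage in passages])
ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.4f}\t{passage}")
```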
MaziyarPanahi/Mistral-7B-KNUT-v0.3-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T11:41:14Z
20
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Herry443/Mistral-7B-KNUT-v0.3", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T11:36:26Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Herry443/Mistral-7B-KNUT-v0.3 --- # Mistral-7B-KNUT-v0.3-Mistral-7B-Instruct-v0.2-slerp Mistral-7B-KNUT-v0.3-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Herry443/Mistral-7B-KNUT-v0.3](https://huggingface.co/Herry443/Mistral-7B-KNUT-v0.3) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Herry443/Mistral-7B-KNUT-v0.3 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Mistral-7B-KNUT-v0.3-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
WizardLMTeam/WizardMath-7B-V1.1
WizardLMTeam
2024-01-12T11:39:28Z
135,353
76
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-19T08:09:17Z
--- inference: false language: - en pipeline_tag: text-generation --- ## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF) <p style="font-size:28px;" align="center"> 🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a> </p> <p align="center"> <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> </p> <p align="center"> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News [12/19/2023] 🔥 We released **WizardMath-7B-V1.1** trained from Mistral-7B, the **SOTA 7B math LLM**, achieves **83.2 pass@1** on GSM8k, and **33.0 pass@1** on MATH. Use this [[**Demo**](http://47.103.63.15:50083/)] to chat with it. [12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, **Mixtral MOE**, and **Claude Instant** on GSM8K pass@1. [12/19/2023] 🔥 **WizardMath-7B-V1.1** is comparable with **ChatGPT 3.5**, **Gemini Pro**, and surpasses **Mixtral MOE** on MATH pass@1. | Model | Checkpoint | Paper | GSM8k | MATH | Demo| | ----- |------| ---- |------|-------|-------| | **WizardMath-7B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **83.2** | **33.0** |[[**Demo**](http://47.103.63.15:50083/)] | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** || | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** || | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | | ## [12/19/2023] Comparing WizardMath-7B-V1.1 with other open source 7B size math LLMs. | Model | GSM8k Pass@1 | MATH Pass@1 | | ----- |------| ---- | | MPT-7B | 6.8 | 3.0 | |Llama 1-7B | 11.0 | 2.9 | |Llama 2-7B|12.3 |2.8 | |Yi-6b| 32.6 |5.8 | |Mistral-7B|37.8 |9.1 | |Qwen-7b|47.8 |9.3 | | RFT-7B | 50.3 | -- | | MAmmoTH-7B (COT) | 50.5 | 10.4 | | WizardMath-7B-V1.0 | 54.9 | 10.7 | |Abel-7B-001 |59.7 |13 | | MetaMath-7B | 66.5 | 19.8 | | Arithmo-Mistral-7B | 74.7 | 25.3 | |MetaMath-Mistral-7B|77.7 |28.2 | |Abel-7B-002 | 80.4 | 29.5 | | **WizardMath-7B-V1.1** | **83.2** | **33.0** | ## [12/19/2023] Comparing WizardMath-7B-V1.1 with large open source (30B~70B) LLMs. 
| Model | GSM8k Pass@1 | MATH Pass@1 |
| ----- |------| ---- |
| Llemma-34B | 51.5 | 25.0 |
| Minerva-62B | 52.4 | 27.6 |
| Llama 2-70B | 56.8 | 13.5 |
| DeepSeek 67B | 63.4 | -- |
| Grok 33B | 62.9 | 23.9 |
| MAmmoTH-70B | 72.4 | 21.1 |
| Yi-34B | 67.9 | 15.9 |
| Mixtral 8x7B | 74.4 | 28.4 |
| MetaMath-70B | 82.3 | 26.6 |
| **WizardMath-7B-V1.1** | **83.2** | **33.0** |

## ❗ Data Contamination Check:

Before model training, we carefully and rigorously checked all the training data, and used multiple deduplication methods to verify and prevent data leakage on the GSM8k and MATH test sets.

🔥
❗<b>Note for model system prompts usage:</b>

Please use **the same system prompts strictly** as ours, and we do not guarantee the accuracy of the **quantized versions**.

**Default version:**

```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```

**CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.)

```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```

## Inference WizardMath Demo Script

We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).

## Citation

Please cite the repo if you use the data, method or code in this repo.

```
@article{luo2023wizardmath,
  title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct},
  author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei},
  journal={arXiv preprint arXiv:2308.09583},
  year={2023}
}
```
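For completeness, here is a minimal, hedged transformers sketch that applies the default prompt format documented above. The model id is this repository's, and the generation settings are illustrative rather than official recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLMTeam/WizardMath-7B-V1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

instruction = "What is 15% of 240?"
# Default (non-CoT) prompt format from the card above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```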
Foma/distilbert-base-uncased-finetuned-squad
Foma
2024-01-12T11:34:32Z
12
0
transformers
[ "transformers", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-12T10:55:56Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1577 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.211 | 1.0 | 5533 | 1.1651 | | 0.9643 | 2.0 | 11066 | 1.1298 | | 0.7409 | 3.0 | 16599 | 1.1577 | ### Framework versions - Transformers 4.36.2 - Pytorch 1.13.1 - Datasets 2.16.1 - Tokenizers 0.15.0
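The auto-generated card above does not include a usage example. A minimal, hedged sketch with the transformers question-answering pipeline (the question and context are placeholders):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Foma/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What was the model fine-tuned for?",
    context=(
        "This checkpoint is DistilBERT fine-tuned for extractive question answering: "
        "given a question and a context passage, it predicts the span of the context "
        "that best answers the question."
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```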
stablediffusionapi/cetusmixcodav2
stablediffusionapi
2024-01-12T11:28:19Z
29
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-12T11:25:51Z
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# cetusmix_codav2 API Inference

![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/19094976411705058612.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.

Replace the key in the code below and set **model_id** to "cetusmixcodav2".

Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)

Try the model for free: [Generate Images](https://modelslab.com/models/cetusmixcodav2)

Model link: [View model](https://modelslab.com/models/cetusmixcodav2)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "cetusmixcodav2",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
Meina/MeinaUnreal_V5
Meina
2024-01-12T11:27:10Z
142
1
diffusers
[ "diffusers", "safetensors", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-12T11:24:43Z
--- license: creativeml-openrail-m language: - en library_name: diffusers ---
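This card currently contains only front matter. Based on the `diffusers` library tag and the text-to-image pipeline class recorded in the metadata, a minimal, hedged usage sketch might look like the following (prompt, dtype, and sampler settings are assumptions; consult the upstream MeinaUnreal model page for the intended settings and license terms):

```python
import torch
from diffusers import StableDiffusionPipeline

# Pipeline class taken from the repo metadata; all other settings are illustrative guesses.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaUnreal_V5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    "portrait photo, cinematic lighting, highly detailed",  # placeholder prompt
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("meinaunreal_sample.png")
```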
Husain/fullstop-punctuation-multilingual-sonar-base
Husain
2024-01-12T11:25:55Z
93
0
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "token-classification", "punctuation prediction", "punctuation", "en", "de", "fr", "it", "nl", "multilingual", "dataset:wmt/europarl", "dataset:SoNaR", "arxiv:2301.03319", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-12T11:04:03Z
--- language: - en - de - fr - it - nl - multilingual tags: - punctuation prediction - punctuation datasets: - wmt/europarl - SoNaR license: mit widget: - text: "Ho sentito che ti sei laureata il che mi fa molto piacere" example_title: "Italian" - text: "Tous les matins vers quatre heures mon père ouvrait la porte de ma chambre" example_title: "French" - text: "Ist das eine Frage Frau Müller" example_title: "German" - text: "My name is Clara and I live in Berkeley California" example_title: "English" - text: "hervatting van de zitting ik verklaar de zitting van het europees parlement die op vrijdag 17 december werd onderbroken te zijn hervat" example_title: "Dutch" metrics: - f1 --- This model predicts the punctuation of English, Italian, French and German texts. We developed it to restore the punctuation of transcribed spoken language. This multilanguage model was trained on the [Europarl Dataset](https://huggingface.co/datasets/wmt/europarl) provided by the [SEPP-NLG Shared Task](https://sites.google.com/view/sentence-segmentation) and for the Dutch language we included the [SoNaR Dataset](http://hdl.handle.net/10032/tm-a2-h5). *Please note that this dataset consists of political speeches. Therefore the model might perform differently on texts from other domains.* The model restores the following punctuation markers: **"." "," "?" "-" ":"** ## Sample Code We provide a simple python package that allows you to process text of any length. ## Install To get started install the package from [pypi](https://pypi.org/project/deepmultilingualpunctuation/): ```bash pip install deepmultilingualpunctuation ``` ### Restore Punctuation ```python from deepmultilingualpunctuation import PunctuationModel model = PunctuationModel(model="oliverguhr/fullstop-punctuation-multilingual-sonar-base") text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller" result = model.restore_punctuation(text) print(result) ``` **output** > My name is Clara and I live in Berkeley, California. Ist das eine Frage, Frau Müller? ### Predict Labels ```python from deepmultilingualpunctuation import PunctuationModel model = PunctuationModel(model="oliverguhr/fullstop-punctuation-multilingual-sonar-base") text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller" clean_text = model.preprocess(text) labled_words = model.predict(clean_text) print(labled_words) ``` **output** > [['My', '0', 0.99998856], ['name', '0', 0.9999708], ['is', '0', 0.99975926], ['Clara', '0', 0.6117834], ['and', '0', 0.9999014], ['I', '0', 0.9999808], ['live', '0', 0.9999666], ['in', '0', 0.99990165], ['Berkeley', ',', 0.9941764], ['California', '.', 0.9952892], ['Ist', '0', 0.9999577], ['das', '0', 0.9999678], ['eine', '0', 0.99998224], ['Frage', ',', 0.9952265], ['Frau', '0', 0.99995995], ['Müller', '?', 0.972517]] ## Results The performance differs for the single punctuation markers as hyphens and colons, in many cases, are optional and can be substituted by either a comma or a full stop. The model achieves the following F1 scores for the different languages: | Label | English | German | French|Italian| Dutch | | ------------- | -------- | ------ | ----- | ----- | ----- | | 0 | 0.990 | 0.996 | 0.991 | 0.988 | 0.994 | | . | 0.924 | 0.951 | 0.921 | 0.917 | 0.959 | | ? 
| 0.825 | 0.829 | 0.800 | 0.736 | 0.817 | | , | 0.798 | 0.937 | 0.811 | 0.778 | 0.813 | | : | 0.535 | 0.608 | 0.578 | 0.544 | 0.657 | | - | 0.345 | 0.384 | 0.353 | 0.344 | 0.464 | | macro average | 0.736 | 0.784 | 0.742 | 0.718 | 0.784 | | micro average | 0.975 | 0.987 | 0.977 | 0.972 | 0.983 | ## Languages ### Models | Languages | Model | | ------------------------------------------ | ------------------------------------------------------------ | | English, Italian, French and German | [oliverguhr/fullstop-punctuation-multilang-large](https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large) | | English, Italian, French, German and Dutch | [oliverguhr/fullstop-punctuation-multilingual-sonar-base](https://huggingface.co/oliverguhr/fullstop-punctuation-multilingual-sonar-base) | | Dutch | [oliverguhr/fullstop-dutch-sonar-punctuation-prediction](https://huggingface.co/oliverguhr/fullstop-dutch-sonar-punctuation-prediction) | ### Community Models | Languages | Model | | ------------------------------------------ | ------------------------------------------------------------ | |English, German, French, Spanish, Bulgarian, Italian, Polish, Dutch, Czech, Portugese, Slovak, Slovenian| [kredor/punctuate-all](https://huggingface.co/kredor/punctuate-all) | | Catalan | [softcatala/fullstop-catalan-punctuation-prediction](https://huggingface.co/softcatala/fullstop-catalan-punctuation-prediction) | You can use different models by setting the model parameter: ```python model = PunctuationModel(model = "oliverguhr/fullstop-dutch-punctuation-prediction") ``` ## How to cite us ``` @article{guhr-EtAl:2021:fullstop, title={FullStop: Multilingual Deep Models for Punctuation Prediction}, author = {Guhr, Oliver and Schumann, Anne-Kathrin and Bahrmann, Frank and Böhme, Hans Joachim}, booktitle = {Proceedings of the Swiss Text Analytics Conference 2021}, month = {June}, year = {2021}, address = {Winterthur, Switzerland}, publisher = {CEUR Workshop Proceedings}, url = {http://ceur-ws.org/Vol-2957/sepp_paper4.pdf} } ``` ``` @misc{https://doi.org/10.48550/arxiv.2301.03319, doi = {10.48550/ARXIV.2301.03319}, url = {https://arxiv.org/abs/2301.03319}, author = {Vandeghinste, Vincent and Guhr, Oliver}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7}, title = {FullStop:Punctuation and Segmentation Prediction for Dutch with Transformers}, publisher = {arXiv}, year = {2023}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } ```
MaziyarPanahi/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-12T11:18:57Z
20
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "jondurbin/bagel-dpo-7b-v0.1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-12T11:14:10Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - jondurbin/bagel-dpo-7b-v0.1 --- # bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.2-slerp bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: jondurbin/bagel-dpo-7b-v0.1 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Manu8/Reinforce-CartPole
Manu8
2024-01-12T11:13:15Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-12T11:13:07Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction