modelId
stringlengths
5
139
author
stringlengths
2
42
last_modified
timestamp[us, tz=UTC]date
2020-02-15 11:33:14
2025-06-05 12:28:32
downloads
int64
0
223M
likes
int64
0
11.7k
library_name
stringclasses
468 values
tags
sequencelengths
1
4.05k
pipeline_tag
stringclasses
54 values
createdAt
timestamp[us, tz=UTC]date
2022-03-02 23:29:04
2025-06-05 12:27:45
card
stringlengths
11
1.01M
SulthanTriesToCode/TinyLlama-1.1B-Chat-v1.0-OpenOrca
SulthanTriesToCode
2024-03-04T13:50:58Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T13:49:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
almanach/camembert-base
almanach
2024-03-04T13:48:46Z
1,317,379
80
transformers
[ "transformers", "pytorch", "tf", "safetensors", "camembert", "fill-mask", "fr", "dataset:oscar", "arxiv:1911.03894", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: fr license: mit datasets: - oscar --- # CamemBERT: a Tasty French Language Model ## Introduction [CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains. For further information or requests, please go to [Camembert Website](https://camembert-model.fr/) ## Pre-trained models | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `camembert-base` | 110M | Base | OSCAR (138 GB of text) | | `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) | | `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) | | `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) | | `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) | | `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) | ## How to use CamemBERT with HuggingFace ##### Load CamemBERT and its sub-word tokenizer : ```python from transformers import CamembertModel, CamembertTokenizer # You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large". tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb") camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb") camembert.eval() # disable dropout (or leave in train mode to finetune) ``` ##### Filling masks using pipeline ```python from transformers import pipeline camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-wikipedia-4gb", tokenizer="camembert/camembert-base-wikipedia-4gb") results = camembert_fill_mask("Le camembert est un fromage de <mask>!") # results #[{'sequence': '<s> Le camembert est un fromage de chèvre!</s>', 'score': 0.4937814474105835, 'token': 19370}, #{'sequence': '<s> Le camembert est un fromage de brebis!</s>', 'score': 0.06255942583084106, 'token': 30616}, #{'sequence': '<s> Le camembert est un fromage de montagne!</s>', 'score': 0.04340197145938873, 'token': 2364}, # {'sequence': '<s> Le camembert est un fromage de Noël!</s>', 'score': 0.02823255956172943, 'token': 3236}, #{'sequence': '<s> Le camembert est un fromage de vache!</s>', 'score': 0.021357402205467224, 'token': 12329}] ``` ##### Extract contextual embedding features from Camembert output ```python import torch # Tokenize in sub-words with SentencePiece tokenized_sentence = tokenizer.tokenize("J'aime le camembert !") # ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!'] # 1-hot encode and add special starting and end tokens encoded_sentence = tokenizer.encode(tokenized_sentence) # [5, 221, 10, 10600, 14, 8952, 10540, 75, 1114, 6] # NB: Can be done in one step : tokenize.encode("J'aime le camembert !") # Feed tokens to Camembert as a torch tensor (batch dim 1) encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0) embeddings, _ = camembert(encoded_sentence) # embeddings.detach() # embeddings.size torch.Size([1, 10, 768]) #tensor([[[-0.0928, 0.0506, -0.0094, ..., -0.2388, 0.1177, -0.1302], # [ 0.0662, 0.1030, -0.2355, ..., -0.4224, -0.0574, -0.2802], # [-0.0729, 0.0547, 0.0192, ..., -0.1743, 0.0998, -0.2677], # ..., ``` ##### Extract contextual embedding features from all Camembert layers ```python from transformers import CamembertConfig # (Need to reload the model with new config) config = CamembertConfig.from_pretrained("camembert/camembert-base-wikipedia-4gb", output_hidden_states=True) camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb", config=config) embeddings, _, all_layer_embeddings = camembert(encoded_sentence) # all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers) all_layer_embeddings[5] # layer 5 contextual embedding : size torch.Size([1, 10, 768]) #tensor([[[-0.0059, -0.0227, 0.0065, ..., -0.0770, 0.0369, 0.0095], # [ 0.2838, -0.1531, -0.3642, ..., -0.0027, -0.8502, -0.7914], # [-0.0073, -0.0338, -0.0011, ..., 0.0533, -0.0250, -0.0061], # ..., ``` ## Authors CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. ## Citation If you use our work, please cite: ```bibtex @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ```
nlpguy/AlloyIngotNeo
nlpguy
2024-03-04T13:48:34Z
52
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:liminerity/Omningotex-7b-slerp", "base_model:merge:liminerity/Omningotex-7b-slerp", "base_model:nlpguy/AlloyIngot", "base_model:merge:nlpguy/AlloyIngot", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-13T13:27:10Z
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: - nlpguy/AlloyIngot - liminerity/Omningotex-7b-slerp model-index: - name: AlloyIngotNeo results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.87 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.99 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 75.95 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.29 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeo name: Open LLM Leaderboard --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [nlpguy/AlloyIngot](https://huggingface.co/nlpguy/AlloyIngot) * [liminerity/Omningotex-7b-slerp](https://huggingface.co/liminerity/Omningotex-7b-slerp) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: liminerity/Omningotex-7b-slerp dtype: bfloat16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.5, 0.3, 0.7, 1.0] - filter: mlp value: [1.0, 0.5, 0.7, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: model: path: liminerity/Omningotex-7b-slerp - layer_range: [0, 32] model: model: path: nlpguy/AlloyIngot ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__AlloyIngotNeo) | Metric |Value| |---------------------------------|----:| |Avg. |76.02| |AI2 Reasoning Challenge (25-Shot)|72.87| |HellaSwag (10-Shot) |88.99| |MMLU (5-Shot) |64.61| |TruthfulQA (0-shot) |75.95| |Winogrande (5-shot) |84.29| |GSM8k (5-shot) |69.45|
nlpguy/AlloyIngot
nlpguy
2024-03-04T13:48:30Z
52
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:Gille/StrangeMerges_21-7B-slerp", "base_model:merge:Gille/StrangeMerges_21-7B-slerp", "base_model:eren23/dpo-binarized-NeutrixOmnibe-7B", "base_model:merge:eren23/dpo-binarized-NeutrixOmnibe-7B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-13T13:00:43Z
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: - eren23/dpo-binarized-NeutrixOmnibe-7B - Gille/StrangeMerges_21-7B-slerp model-index: - name: AlloyIngot results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.98 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.05 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.83 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 75.12 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot name: Open LLM Leaderboard --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B) * [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: Gille/StrangeMerges_21-7B-slerp dtype: bfloat16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.3, 0.5, 0.7, 1.0] - filter: mlp value: [1.0, 0.7, 0.5, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: model: path: eren23/dpo-binarized-NeutrixOmnibe-7B - layer_range: [0, 32] model: model: path: Gille/StrangeMerges_21-7B-slerp ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__AlloyIngot) | Metric |Value| |---------------------------------|----:| |Avg. |76.20| |AI2 Reasoning Challenge (25-Shot)|73.98| |HellaSwag (10-Shot) |89.05| |MMLU (5-Shot) |64.83| |TruthfulQA (0-shot) |75.12| |Winogrande (5-shot) |85.08| |GSM8k (5-shot) |69.14|
nlpguy/Hermes-low-tune
nlpguy
2024-03-04T13:48:26Z
53
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:openaccess-ai-collective/dpopenhermes-alpha-v0", "base_model:finetune:openaccess-ai-collective/dpopenhermes-alpha-v0", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-04T22:33:51Z
--- license: apache-2.0 tags: - mergekit - merge base_model: - openaccess-ai-collective/dpopenhermes-alpha-v0 - simonveitner/Math-OpenHermes-2.5-Mistral-7B model-index: - name: Hermes-low-tune results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 63.99 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.75 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.6 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.37 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 62.47 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune name: Open LLM Leaderboard --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [openaccess-ai-collective/dpopenhermes-alpha-v0](https://huggingface.co/openaccess-ai-collective/dpopenhermes-alpha-v0) * [simonveitner/Math-OpenHermes-2.5-Mistral-7B](https://huggingface.co/simonveitner/Math-OpenHermes-2.5-Mistral-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: simonveitner/Math-OpenHermes-2.5-Mistral-7B dtype: float16 merge_method: slerp parameters: t: - value: 0.5 slices: - sources: - layer_range: [0, 32] model: simonveitner/Math-OpenHermes-2.5-Mistral-7B - layer_range: [0, 32] model: openaccess-ai-collective/dpopenhermes-alpha-v0 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__Hermes-low-tune) | Metric |Value| |---------------------------------|----:| |Avg. |67.18| |AI2 Reasoning Challenge (25-Shot)|63.99| |HellaSwag (10-Shot) |83.75| |MMLU (5-Shot) |63.60| |TruthfulQA (0-shot) |51.37| |Winogrande (5-shot) |77.90| |GSM8k (5-shot) |62.47|
nlpguy/ColorShadow-7B
nlpguy
2024-03-04T13:48:06Z
1,553
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-30T13:13:34Z
--- license: apache-2.0 tags: - merge model-index: - name: ColorShadow-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.83 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.15 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 61.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 59.56 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 55.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B name: Open LLM Leaderboard --- # ColorShadow-7B This is a Gradient-SLERP merge between diffnamehard/Mistral-CatMacaroni-slerp-7B and cookinai/Valkyrie-V1 performed using mergekit. Here is the config file used: ``` slices: - sources: - model: diffnamehard/Mistral-CatMacaroni-slerp-7B layer_range: [0, 32] - model: cookinai/Valkyrie-V1 layer_range: [0, 32] merge_method: slerp base_model: diffnamehard/Mistral-CatMacaroni-slerp-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__ColorShadow-7B) | Metric |Value| |---------------------------------|----:| |Avg. |68.34| |AI2 Reasoning Challenge (25-Shot)|67.83| |HellaSwag (10-Shot) |85.15| |MMLU (5-Shot) |61.69| |TruthfulQA (0-shot) |59.56| |Winogrande (5-shot) |80.58| |GSM8k (5-shot) |55.19|
zzttbrdd/sn6_04g
zzttbrdd
2024-03-04T13:47:55Z
3
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T05:44:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nlpguy/Westgate
nlpguy
2024-03-04T13:47:50Z
54
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:jsfs11/TurdusTrixBeagle-DARETIES-7B", "base_model:merge:jsfs11/TurdusTrixBeagle-DARETIES-7B", "base_model:senseable/garten2-7b", "base_model:merge:senseable/garten2-7b", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-04T13:20:39Z
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: - jsfs11/TurdusTrixBeagle-DARETIES-7B - senseable/garten2-7b model-index: - name: Westgate results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.42 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Westgate name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.14 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Westgate name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Westgate name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 62.59 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Westgate name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.71 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Westgate name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.05 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Westgate name: Open LLM Leaderboard --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [jsfs11/TurdusTrixBeagle-DARETIES-7B](https://huggingface.co/jsfs11/TurdusTrixBeagle-DARETIES-7B) * [senseable/garten2-7b](https://huggingface.co/senseable/garten2-7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: senseable/garten2-7b dtype: float16 merge_method: slerp parameters: t: - value: [0.0, 0.3, 0.5, 0.7, 1.0] slices: - sources: - layer_range: [0, 32] model: model: path: jsfs11/TurdusTrixBeagle-DARETIES-7B - layer_range: [0, 32] model: model: path: senseable/garten2-7b ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__Westgate) | Metric |Value| |---------------------------------|----:| |Avg. |73.84| |AI2 Reasoning Challenge (25-Shot)|71.42| |HellaSwag (10-Shot) |88.14| |MMLU (5-Shot) |65.11| |TruthfulQA (0-shot) |62.59| |Winogrande (5-shot) |85.71| |GSM8k (5-shot) |70.05|
nlpguy/Lelantos-low-tune
nlpguy
2024-03-04T13:47:42Z
54
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:SanjiWatsuki/Lelantos-7B", "base_model:merge:SanjiWatsuki/Lelantos-7B", "base_model:openaccess-ai-collective/DPOpenHermes-7B-v2", "base_model:merge:openaccess-ai-collective/DPOpenHermes-7B-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T17:24:45Z
--- license: apache-2.0 tags: - mergekit - merge base_model: - openaccess-ai-collective/DPOpenHermes-7B-v2 - SanjiWatsuki/Lelantos-7B model-index: - name: Lelantos-low-tune results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.06 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.06 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.33 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 66.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune name: Open LLM Leaderboard --- # mergedtwo This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2) as a base. ### Models Merged The following models were included in the merge: * merged * [SanjiWatsuki/Lelantos-7B](https://huggingface.co/SanjiWatsuki/Lelantos-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: openaccess-ai-collective/DPOpenHermes-7B-v2 dtype: bfloat16 merge_method: task_arithmetic slices: - sources: - layer_range: [0, 32] model: openaccess-ai-collective/DPOpenHermes-7B-v2 - layer_range: [0, 32] model: merged parameters: weight: 0.5 - layer_range: [0, 32] model: SanjiWatsuki/Lelantos-7B parameters: weight: 0.5 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__Lelantos-low-tune) | Metric |Value| |---------------------------------|----:| |Avg. |70.82| |AI2 Reasoning Challenge (25-Shot)|67.06| |HellaSwag (10-Shot) |86.06| |MMLU (5-Shot) |64.11| |TruthfulQA (0-shot) |61.33| |Winogrande (5-shot) |79.56| |GSM8k (5-shot) |66.79|
ChuckMcSneed/WinterGoddess-1.4x-70b-32k
ChuckMcSneed
2024-03-04T13:44:30Z
51
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "license:llama2", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-30T08:40:12Z
--- license: llama2 tags: - mergekit - merge model-index: - name: WinterGoddess-1.4x-70b-32k results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.16 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.12 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.42 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.87 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.29 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k name: Open LLM Leaderboard --- This is a 32k version of Sao10K/WinterGoddess-1.4x-70B-L2, extended using method discussed [here](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2). # Quants Thanks for GGUF, [@Nexesenex](https://huggingface.co/Nexesenex)! - [GGUF](https://huggingface.co/Nexesenex/ChuckMcSneed_WinterGoddess-1.4x-70b-32k-iMat.GGUF) # Benchmarks ### NeoEvalPlusN_benchmark [My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) | Test name | WinterGoddess | WinterGoddess-32k | | ---------- | ---------- | ------- | | B | 2 | 2.5 | | C | 1.5 | 2 | | D | 3 | 0 | | S | 2.75 | 1.5 | | P | 5.5 | 2.25 | | Total | 14.75 | 8.25 | ### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) [Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K| |---------------------------------------|-------|-----|---------|-----|----------|----------|-----| |Sao10K/WinterGoddess-1.4x-70B-L2 |73.23 |72.78|90.11 |71.12|65.76 |85 |54.59| |ChuckMcSneed/WinterGoddess-1.4x-70b-32k|69.4 |71.16|89.12 |66.42|63.87 |82.56 |43.29| |Difference |3.83 |1.62 |0.99 |4.7 |1.89 |2.44 |11.3 | Here the losses seem far less brutal than on my bench. It seems that extending with LongLORA kills MMLU and GSM8K performance. Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChuckMcSneed__WinterGoddess-1.4x-70b-32k) | Metric |Value| |---------------------------------|----:| |Avg. |69.40| |AI2 Reasoning Challenge (25-Shot)|71.16| |HellaSwag (10-Shot) |89.12| |MMLU (5-Shot) |66.42| |TruthfulQA (0-shot) |63.87| |Winogrande (5-shot) |82.56| |GSM8k (5-shot) |43.29|
MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1
MaziyarPanahi
2024-03-04T13:42:47Z
61
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-12T21:44:40Z
--- license: apache-2.0 library_name: transformers tags: - merge pipeline_tag: text-generation model-index: - name: TheTop-5x7B-Instruct-S3-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.9 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.0 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.13 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 64.47 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 72.02 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1 name: Open LLM Leaderboard --- Merge of top 7B models and the SLERP of other 7B models > mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention. > > ## Eval > ```python > { "all": { "acc": 0.6571641282160704, "acc_stderr": 0.031918970852064334, "acc_norm": 0.6561506230894164, "acc_norm_stderr": 0.03258982989656136, "mc1": 0.4834761321909425, "mc1_stderr": 0.017493940190057723, "mc2": 0.6447306680251751, "mc2_stderr": 0.015519245883344577 }, "harness|arc:challenge|25": { "acc": 0.689419795221843, "acc_stderr": 0.01352229209805306, "acc_norm": 0.7090443686006825, "acc_norm_stderr": 0.013273077865907595 }, "harness|hellaswag|10": { "acc": 0.7168890659231228, "acc_stderr": 0.004495891440519419, "acc_norm": 0.8800039832702649, "acc_norm_stderr": 0.0032429275808698544 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.047258156262526045, "acc_norm": 0.33, "acc_norm_stderr": 0.047258156262526045 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6370370370370371, "acc_stderr": 0.04153948404742398, "acc_norm": 0.6370370370370371, "acc_norm_stderr": 0.04153948404742398 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7105263157894737, "acc_stderr": 0.03690677986137283, "acc_norm": 0.7105263157894737, "acc_norm_stderr": 0.03690677986137283 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.65, "acc_stderr": 0.0479372485441102, "acc_norm": 0.65, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6981132075471698, "acc_stderr": 0.02825420034443866, "acc_norm": 0.6981132075471698, "acc_norm_stderr": 0.02825420034443866 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7638888888888888, "acc_stderr": 0.03551446610810826, "acc_norm": 0.7638888888888888, "acc_norm_stderr": 0.03551446610810826 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.56, "acc_stderr": 0.049888765156985884, "acc_norm": 0.56, "acc_norm_stderr": 0.049888765156985884 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.27, "acc_stderr": 0.0446196043338474, "acc_norm": 0.27, "acc_norm_stderr": 0.0446196043338474 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6589595375722543, "acc_stderr": 0.03614665424180826, "acc_norm": 0.6589595375722543, "acc_norm_stderr": 0.03614665424180826 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4117647058823529, "acc_stderr": 0.048971049527263666, "acc_norm": 0.4117647058823529, "acc_norm_stderr": 0.048971049527263666 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5787234042553191, "acc_stderr": 0.03227834510146268, "acc_norm": 0.5787234042553191, "acc_norm_stderr": 0.03227834510146268 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5175438596491229, "acc_stderr": 0.04700708033551038, "acc_norm": 0.5175438596491229, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555497, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555497 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4312169312169312, "acc_stderr": 0.02550648169813821, "acc_norm": 0.4312169312169312, "acc_norm_stderr": 0.02550648169813821 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.48412698412698413, "acc_stderr": 0.04469881854072606, "acc_norm": 0.48412698412698413, "acc_norm_stderr": 0.04469881854072606 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7838709677419354, "acc_stderr": 0.02341529343356853, "acc_norm": 0.7838709677419354, "acc_norm_stderr": 0.02341529343356853 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4975369458128079, "acc_stderr": 0.03517945038691063, "acc_norm": 0.4975369458128079, "acc_norm_stderr": 0.03517945038691063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.67, "acc_stderr": 0.04725815626252607, "acc_norm": 0.67, "acc_norm_stderr": 0.04725815626252607 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7878787878787878, "acc_stderr": 0.031922715695483, "acc_norm": 0.7878787878787878, "acc_norm_stderr": 0.031922715695483 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7929292929292929, "acc_stderr": 0.028869778460267045, "acc_norm": 0.7929292929292929, "acc_norm_stderr": 0.028869778460267045 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.021500249576033456, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.021500249576033456 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6666666666666666, "acc_stderr": 0.023901157979402534, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.023901157979402534 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34814814814814815, "acc_stderr": 0.029045600290616255, "acc_norm": 0.34814814814814815, "acc_norm_stderr": 0.029045600290616255 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.680672268907563, "acc_stderr": 0.030283995525884396, "acc_norm": 0.680672268907563, "acc_norm_stderr": 0.030283995525884396 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33112582781456956, "acc_stderr": 0.038425817186598696, "acc_norm": 0.33112582781456956, "acc_norm_stderr": 0.038425817186598696 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8385321100917431, "acc_stderr": 0.015776239256163224, "acc_norm": 0.8385321100917431, "acc_norm_stderr": 0.015776239256163224 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5138888888888888, "acc_stderr": 0.03408655867977749, "acc_norm": 0.5138888888888888, "acc_norm_stderr": 0.03408655867977749 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.024509803921568603, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.024509803921568603 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8143459915611815, "acc_stderr": 0.025310495376944856, "acc_norm": 0.8143459915611815, "acc_norm_stderr": 0.025310495376944856 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6860986547085202, "acc_stderr": 0.031146796482972465, "acc_norm": 0.6860986547085202, "acc_norm_stderr": 0.031146796482972465 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7862595419847328, "acc_stderr": 0.0359546161177469, "acc_norm": 0.7862595419847328, "acc_norm_stderr": 0.0359546161177469 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8099173553719008, "acc_stderr": 0.03581796951709282, "acc_norm": 0.8099173553719008, "acc_norm_stderr": 0.03581796951709282 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7962962962962963, "acc_stderr": 0.03893542518824847, "acc_norm": 0.7962962962962963, "acc_norm_stderr": 0.03893542518824847 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7730061349693251, "acc_stderr": 0.03291099578615769, "acc_norm": 0.7730061349693251, "acc_norm_stderr": 0.03291099578615769 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5, "acc_stderr": 0.04745789978762494, "acc_norm": 0.5, "acc_norm_stderr": 0.04745789978762494 }, "harness|hendrycksTest-management|5": { "acc": 0.7961165048543689, "acc_stderr": 0.03989139859531771, "acc_norm": 0.7961165048543689, "acc_norm_stderr": 0.03989139859531771 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8760683760683761, "acc_stderr": 0.02158649400128137, "acc_norm": 0.8760683760683761, "acc_norm_stderr": 0.02158649400128137 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.73, "acc_stderr": 0.0446196043338474, "acc_norm": 0.73, "acc_norm_stderr": 0.0446196043338474 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8288633461047255, "acc_stderr": 0.013468201614066307, "acc_norm": 0.8288633461047255, "acc_norm_stderr": 0.013468201614066307 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7514450867052023, "acc_stderr": 0.023267528432100174, "acc_norm": 0.7514450867052023, "acc_norm_stderr": 0.023267528432100174 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4480446927374302, "acc_stderr": 0.016631976628930595, "acc_norm": 0.4480446927374302, "acc_norm_stderr": 0.016631976628930595 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7320261437908496, "acc_stderr": 0.025360603796242553, "acc_norm": 0.7320261437908496, "acc_norm_stderr": 0.025360603796242553 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.707395498392283, "acc_stderr": 0.02583989833487798, "acc_norm": 0.707395498392283, "acc_norm_stderr": 0.02583989833487798 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7530864197530864, "acc_stderr": 0.023993501709042107, "acc_norm": 0.7530864197530864, "acc_norm_stderr": 0.023993501709042107 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4787234042553192, "acc_stderr": 0.029800481645628693, "acc_norm": 0.4787234042553192, "acc_norm_stderr": 0.029800481645628693 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4791395045632334, "acc_stderr": 0.012759117066518015, "acc_norm": 0.4791395045632334, "acc_norm_stderr": 0.012759117066518015 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7058823529411765, "acc_stderr": 0.02767846864214472, "acc_norm": 0.7058823529411765, "acc_norm_stderr": 0.02767846864214472 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6862745098039216, "acc_stderr": 0.018771683893528176, "acc_norm": 0.6862745098039216, "acc_norm_stderr": 0.018771683893528176 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, "acc_stderr": 0.04461272175910509, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.04461272175910509 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7346938775510204, "acc_stderr": 0.028263889943784603, "acc_norm": 0.7346938775510204, "acc_norm_stderr": 0.028263889943784603 }, "harness|hendrycksTest-sociology|5": { "acc": 0.835820895522388, "acc_stderr": 0.026193923544454115, "acc_norm": 0.835820895522388, "acc_norm_stderr": 0.026193923544454115 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.85, "acc_stderr": 0.03588702812826371, "acc_norm": 0.85, "acc_norm_stderr": 0.03588702812826371 }, "harness|hendrycksTest-virology|5": { "acc": 0.5481927710843374, "acc_stderr": 0.03874371556587953, "acc_norm": 0.5481927710843374, "acc_norm_stderr": 0.03874371556587953 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.4834761321909425, "mc1_stderr": 0.017493940190057723, "mc2": 0.6447306680251751, "mc2_stderr": 0.015519245883344577 }, "harness|winogrande|5": { "acc": 0.8366219415943172, "acc_stderr": 0.010390695970273764 }, "harness|gsm8k|5": { "acc": 0.7202426080363912, "acc_stderr": 0.012364384016735319 } } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S3-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |74.03| |AI2 Reasoning Challenge (25-Shot)|70.90| |HellaSwag (10-Shot) |88.00| |MMLU (5-Shot) |65.13| |TruthfulQA (0-shot) |64.47| |Winogrande (5-shot) |83.66| |GSM8k (5-shot) |72.02|
bjaburg/gpt2-imdb
bjaburg
2024-03-04T13:41:04Z
0
0
null
[ "tensorboard", "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T13:41:01Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
model-hub/DeciLM-7B
model-hub
2024-03-04T13:36:47Z
17
0
transformers
[ "transformers", "safetensors", "text-generation", "custom_code", "en", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2023-12-14T16:32:51Z
--- license: apache-2.0 language: - en --- # DeciLM-7B DeciLM-7B is a 7.04 billion parameter decoder-only text generation model, released under the Apache 2.0 license. At the time of release, DeciLM-7B is the top-performing 7B base language model on the Open LLM Leaderboard. With support for an 8K-token sequence length, this highly efficient model uses variable Grouped-Query Attention (GQA) to achieve a superior balance between accuracy and computational efficiency. The model's architecture was generated using Deci's proprietary Neural Architecture Search technology, AutoNAC. ## Model Details ### Model Description Deci developed and released the DeciLM-7B language model, a pre-trained, high-efficiency text generation model with 7 billion parameters. DeciLM-7B is not only the most accurate 7B base model, but it also outpaces all models in its class with a throughput that is up to 4.4x that of Mistral-7B's. An instruct version [DeciLM-7B-instruct](https://huggingface.co/Deci/DeciLM-7B-instruct) has also been released. - **Developed by:** [Deci](https://deci.ai/) - **Model type:** DeciLM is an auto-regressive language model using an optimized transformer decoder architecture that includes variable Grouped-Query Attention. - **Language(s) (NLP):** English - **License:** Apache 2.0 ## Model Architecture | Parameters | Layers | Heads | Sequence Length | GQA num_key_value_heads* | |:----------|:----------|:----------|:----------|:----------| | 7.04 billion | 32 | 32 | 8192 | Variable | *AutoNAC was employed to optimize the selection of the GQA num_key_value_heads for each layer. ### Model Sources - **Blog:** [DeciLM-7B Technical Blog](https://deci.ai/blog/introducing-DeciLM-7B-the-fastest-and-most-accurate-7b-large-language-model-to-date) - **Demo:** [DeciLM-7B-instruct Demo](https://huggingface.co/spaces/Deci/DeciLM-7B-instruct) - **Finetuning Notebook:** [DeciLM-7B Finetuning Notebook](https://colab.research.google.com/drive/1kEV6i96AQ94xTCvSd11TxkEaksTb5o3U?usp=sharing) - **Text Generation Notebook:** [DeciLM-7B-instruct Text Generation Notebook](https://bit.ly/declm-7b-instruct) ## Uses The model is intended for commercial and research use in English and can be fine-tuned for various tasks and languages. ## How to Get Started with the Model Use the code below to get started with the model. ```bibtex import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Deci/DeciLM-7B" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, trust_remote_code=True).to(device) inputs = tokenizer.encode("In a shocking finding, scientists discovered a herd of unicorns living in", return_tensors="pt").to(device) outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_p=0.95) print(tokenizer.decode(outputs[0])) ``` ## Evaluation Below are DeciLM-7B and DeciLM-7B-instruct's Open LLM Leaderboard results. | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | |:----------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:| | DecilLM-7B | 61.55 | 59.39 | 82.51 | 59.76 | 40.33 | 79.95 | 47.38 | | DecilLM-7B-instruct | 63.19 | 61.01 | 82.37 | 60.24 | 49.75 | 79.72 | 46.02 | ### Runtime Benchmarks | Inference Tool | Hardware | Prompt length | Generation length | Generated tokens/sec | Batch Size | Number of Prompts | |:----------|:----------|:---------:|:---------:|:---------:|:---------:|:---------:| | HuggingFace (PyTorch) | A100 (SXM4-80GB-400W) | 512 | 512 | **1174** | 352 | 352 | | HuggingFace (PyTorch) | A100 (SXM4-80GB-400W) | 2048 | 2048 | **328** | 72 | 72 | | Infery-LLM | A100 (SXM4-80GB-400W)| 512 | 512 | **4559** | 1024 | 4096 | | Infery-LLM | A100 (SXM4-80GB-400W) | 2048 | 2048 | **3997** | 512 | 2048 | | Infery-LLM | A10 | 512 | 512 | **1345** | 128 | 512 | | Infery-LLM | A10 | 2048 | 2048 | **599** | 32 | 128 | - In order to replicate the results of the Hugging Face benchmarks, you can use this [code example](https://huggingface.co/Deci/DeciLM-7B/blob/main/benchmark_hf_model.py). - Infery-LLM, Deci's inference engine, features a suite of optimization algorithms, including selective quantization, optimized beam search, continuous batching, and custom CUDA kernels. To witness the full capabilities of Infery-LLM first-hand, we invite you to engage with our [interactive demo](https://console.deci.ai/infery-llm-demo). ## Ethical Considerations and Limitations DeciLM-7B is a new technology that comes with inherent risks associated with its use. The testing conducted so far has been primarily in English and does not encompass all possible scenarios. Like those of all large language models, DeciLM-7B's outputs are unpredictable, and the model may generate responses that are inaccurate, biased, or otherwise objectionable. Consequently, developers planning to use DeciLM-7B should undertake thorough safety testing and tuning designed explicitly for their intended applications of the model before deployment. ## How to Cite Please cite this model using this format. ```bibtex @misc{DeciFoundationModels, title = {DeciLM-7B}, author = {DeciAI Research Team}, year = {2023} url={https://huggingface.co/Deci/DeciLM-7B}, } ```
jan-hq/Mistral-7B-Instruct-v0.2-DARE
jan-hq
2024-03-04T13:36:21Z
15
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-12T09:12:10Z
--- language: - en license: apache-2.0 model-index: - name: Mistral-7B-Instruct-v0.2-DARE results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 61.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 75.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 49.99 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.36 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 18.12 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/Mistral-7B-Instruct-v0.2-DARE name: Open LLM Leaderboard --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <p align="center"> <a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a> </p> <!-- header end --> # Model Description This model uses the `DARE` method to merge [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) with 3 leading models in 12th Dec on [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard): 1. [OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp) 2. [MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) 3. [v1olet_marcoroni-go-bruins-merge-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B) - base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) The yaml config file for this model is here: ```yaml base_model: mistralai/Mistral-7B-Instruct-v0.2 dtype: bfloat16 merge_method: dare_ties models: - model: mistralai/Mistral-7B-Instruct-v0.2 - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp parameters: density: 0.8 weight: 0.4 - model: Q-bert/MetaMath-Cybertron-Starling parameters: density: 0.8 weight: 0.3 - model: v1olet/v1olet_marcoroni-go-bruins-merge-7B parameters: density: 0.8 weight: 0.3 parameters: int8_mask: true ``` # Prompt template: - **ChatML** ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` - **Alpaca** ``` {system_message} ### Instruction: {prompt} ### Response: ``` # Run this model You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux. Jan is an open source, ChatGPT alternative that is: - 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you. - 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time. - 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints - 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png) # About Jan Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. # Jan Model Merger This is a test project for merging models. # Open LLM Leaderboard Evaluation Results Detailed results can be found here. | Metric | Value | |-----------------------|---------------------------| | Avg. | ?| | ARC (25-shot) | ? | | HellaSwag (10-shot) | ? | | MMLU (5-shot) | ?| | TruthfulQA (0-shot) | ? | | Winogrande (5-shot) | ? | | GSM8K (5-shot) | ? | # Acknowlegement - [mergekit](https://github.com/cg123/mergekit) - [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md) - [SLERP](https://github.com/Digitous/LLM-SLERP-Merge) - [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_janhq__Mistral-7B-Instruct-v0.2-DARE) | Metric |Value| |---------------------------------|----:| |Avg. |55.84| |AI2 Reasoning Challenge (25-Shot)|61.95| |HellaSwag (10-Shot) |75.62| |MMLU (5-Shot) |49.99| |TruthfulQA (0-shot) |54.36| |Winogrande (5-shot) |74.98| |GSM8k (5-shot) |18.12|
jan-hq/stealth-v1.3
jan-hq
2024-03-04T13:36:19Z
1,353
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-15T01:46:19Z
--- language: - en license: apache-2.0 model-index: - name: stealth-v1.3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.49 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.74 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 55.71 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.57 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3 name: Open LLM Leaderboard --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto" > <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <p align="center"> <a href="https://jan.ai/">Jan</a > - <a href="https://discord.gg/AsJ8krTT3N">Discord</a> </p> <!-- header end --> # Prompt template ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` # Run this model You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux. Jan is an open source, ChatGPT alternative that is: - 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you. - 🗂️ ** An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time. - 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints - 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png) # About Jan Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__stealth-v1.3) | Metric |Value| |---------------------------------|----:| |Avg. |71.12| |AI2 Reasoning Challenge (25-Shot)|67.49| |HellaSwag (10-Shot) |86.74| |MMLU (5-Shot) |64.45| |TruthfulQA (0-shot) |55.71| |Winogrande (5-shot) |80.74| |GSM8k (5-shot) |71.57|
jan-hq/stealth-v1.2
jan-hq
2024-03-04T13:36:13Z
1,346
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-28T15:54:05Z
--- language: - en license: apache-2.0 model-index: - name: stealth-v1.2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.38 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.14 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.33 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.23 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 72.25 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2 name: Open LLM Leaderboard --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <p align="center"> <a href="https://jan.ai/">Jan</a > - <a href="https://discord.gg/AsJ8krTT3N">Discord</a> </p> <!-- header end --> # Prompt template ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` # Run this model You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux. Jan is an open source, ChatGPT alternative that is: - 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you. - 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time. - 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints - 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png) # About Jan Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__stealth-v1.2) | Metric |Value| |---------------------------------|----:| |Avg. |70.68| |AI2 Reasoning Challenge (25-Shot)|66.38| |HellaSwag (10-Shot) |86.14| |MMLU (5-Shot) |64.33| |TruthfulQA (0-shot) |54.23| |Winogrande (5-shot) |80.74| |GSM8k (5-shot) |72.25|
jan-hq/supermario-slerp-v3
jan-hq
2024-03-04T13:36:11Z
58
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-12T11:01:00Z
--- language: - en license: apache-2.0 model-index: - name: supermario-slerp-v3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.28 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.71 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.77 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3 name: Open LLM Leaderboard --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <p align="center"> <a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a> </p> <!-- header end --> # Model Description This model uses the `Slerp` merge method from our 2 best models in 12th Dec: 1. [supermario-slerp-v2](https://huggingface.co/janhq/supermario-slerp-v2) 2. [supermario-v2](https://huggingface.co/janhq/supermario-v2) - base model: [supermario-slerp-v2](https://huggingface.co/janhq/supermario-slerp-v2) The yaml config file for this model is here: ```yaml slices: - sources: - model: janhq/supermario-slerp-v2 layer_range: [0, 32] - model: janhq/supermario-v2 layer_range: [0, 32] merge_method: slerp base_model: janhq/supermario-slerp-v2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` # Run this model You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux. Jan is an open source, ChatGPT alternative that is: - 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you. - 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time. - 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints - 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png) # About Jan Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. # Jan Model Merger This is a test project for merging models. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found here. | Metric | Value | |-----------------------|---------------------------| | Avg. | ?| | ARC (25-shot) | ? | | HellaSwag (10-shot) | ? | | MMLU (5-shot) | ?| | TruthfulQA (0-shot) | ? | | Winogrande (5-shot) | ? | | GSM8K (5-shot) | ? | # Acknowlegement - [mergekit](https://github.com/cg123/mergekit) - [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md) - [SLERP](https://github.com/Digitous/LLM-SLERP-Merge) - [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__supermario-slerp-v3) | Metric |Value| |---------------------------------|----:| |Avg. |72.22| |AI2 Reasoning Challenge (25-Shot)|69.28| |HellaSwag (10-Shot) |86.71| |MMLU (5-Shot) |65.11| |TruthfulQA (0-shot) |61.77| |Winogrande (5-shot) |80.51| |GSM8k (5-shot) |69.98|
jan-hq/supermario-v2
jan-hq
2024-03-04T13:36:09Z
180
7
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-11T12:49:07Z
--- language: - en license: apache-2.0 model-index: - name: supermario-v2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.52 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.51 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.88 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.58 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 72.18 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2 name: Open LLM Leaderboard --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <p align="center"> <a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a> </p> <!-- header end --> # Model Description This model uses the `DARE_TIES` merge method from 3 models: 1. [OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp) 2. [MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) 3. [Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) - base model: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) The yaml config file for this model here: ```yaml base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 merge_method: dare_ties models: - model: mistralai/Mistral-7B-v0.1 - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp parameters: density: 0.8 weight: 0.4 - model: Q-bert/MetaMath-Cybertron-Starling parameters: density: 0.8 weight: 0.3 - model: AIDC-ai-business/Marcoroni-7B-v3 parameters: density: 0.8 weight: 0.3 parameters: int8_mask: true ``` # Prompt template: - **ChatML** ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` - **System** ``` ### System: {system} ### User: {user} ### Assistant: ``` # Run this model You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux. Jan is an open source, ChatGPT alternative that is: - 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you. - 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time. - 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints - 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png) # About Jan Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. # Jan Model Merger This is a test project for merging models. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_janhq__supermario-v2). | Metric | Value | |-----------------------|---------------------------| | Avg. | 72.36| | ARC (25-shot) | 68.52 | | HellaSwag (10-shot) | 86.51 | | MMLU (5-shot) | 64.88| | TruthfulQA (0-shot) | 60.58 | | Winogrande (5-shot) | 81.37 | | GSM8K (5-shot) | 72.18 | # Acknowlegement - [mergekit](https://github.com/cg123/mergekit) - [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md) - [lm-evaluation-harness ](https://github.com/EleutherAI/lm-evaluation-harness) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_janhq__supermario-v2) | Metric |Value| |---------------------------------|----:| |Avg. |72.34| |AI2 Reasoning Challenge (25-Shot)|68.52| |HellaSwag (10-Shot) |86.51| |MMLU (5-Shot) |64.88| |TruthfulQA (0-shot) |60.58| |Winogrande (5-shot) |81.37| |GSM8k (5-shot) |72.18|
MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1
MaziyarPanahi
2024-03-04T13:35:06Z
62
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-12T18:48:40Z
--- license: apache-2.0 library_name: transformers tags: - merge pipeline_tag: text-generation model-index: - name: TheTop-5x7B-Instruct-P-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 38.57 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 51.54 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.36 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 50.07 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 72.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1 name: Open LLM Leaderboard --- Merge of top 7B models with PASS method > mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention. ## Eval ```python { "all": { "acc": 0.6152059168567449, "acc_stderr": 0.031951119145286845, "acc_norm": 0.6274010157580394, "acc_norm_stderr": 0.032831804892806175, "mc1": 0.25091799265605874, "mc1_stderr": 0.015176985027707694, "mc2": 0.5006656333594469, "mc2_stderr": 0.01636490303268174 }, "harness|arc:challenge|25": { "acc": 0.3447098976109215, "acc_stderr": 0.013888816286782112, "acc_norm": 0.3856655290102389, "acc_norm_stderr": 0.01422425097325717 }, "harness|hellaswag|10": { "acc": 0.34116709818761204, "acc_stderr": 0.004731324409133264, "acc_norm": 0.515435172276439, "acc_norm_stderr": 0.004987403268345035 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.27, "acc_stderr": 0.04461960433384741, "acc_norm": 0.27, "acc_norm_stderr": 0.04461960433384741 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5703703703703704, "acc_stderr": 0.042763494943765995, "acc_norm": 0.5703703703703704, "acc_norm_stderr": 0.042763494943765995 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6842105263157895, "acc_stderr": 0.0378272898086547, "acc_norm": 0.6842105263157895, "acc_norm_stderr": 0.0378272898086547 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7169811320754716, "acc_stderr": 0.027724236492700918, "acc_norm": 0.7169811320754716, "acc_norm_stderr": 0.027724236492700918 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7638888888888888, "acc_stderr": 0.03551446610810826, "acc_norm": 0.7638888888888888, "acc_norm_stderr": 0.03551446610810826 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.45, "acc_stderr": 0.05, "acc_norm": 0.45, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6358381502890174, "acc_stderr": 0.03669072477416907, "acc_norm": 0.6358381502890174, "acc_norm_stderr": 0.03669072477416907 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4019607843137255, "acc_stderr": 0.048786087144669955, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.048786087144669955 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.04292346959909283, "acc_norm": 0.76, "acc_norm_stderr": 0.04292346959909283 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5446808510638298, "acc_stderr": 0.03255525359340355, "acc_norm": 0.5446808510638298, "acc_norm_stderr": 0.03255525359340355 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4824561403508772, "acc_stderr": 0.04700708033551038, "acc_norm": 0.4824561403508772, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5172413793103449, "acc_stderr": 0.04164188720169375, "acc_norm": 0.5172413793103449, "acc_norm_stderr": 0.04164188720169375 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.42857142857142855, "acc_stderr": 0.025487187147859372, "acc_norm": 0.42857142857142855, "acc_norm_stderr": 0.025487187147859372 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3968253968253968, "acc_stderr": 0.043758884927270605, "acc_norm": 0.3968253968253968, "acc_norm_stderr": 0.043758884927270605 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7741935483870968, "acc_stderr": 0.023785577884181015, "acc_norm": 0.7741935483870968, "acc_norm_stderr": 0.023785577884181015 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7636363636363637, "acc_stderr": 0.03317505930009181, "acc_norm": 0.7636363636363637, "acc_norm_stderr": 0.03317505930009181 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7373737373737373, "acc_stderr": 0.03135305009533085, "acc_norm": 0.7373737373737373, "acc_norm_stderr": 0.03135305009533085 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8808290155440415, "acc_stderr": 0.023381935348121437, "acc_norm": 0.8808290155440415, "acc_norm_stderr": 0.023381935348121437 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.617948717948718, "acc_stderr": 0.024635549163908237, "acc_norm": 0.617948717948718, "acc_norm_stderr": 0.024635549163908237 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2777777777777778, "acc_stderr": 0.027309140588230203, "acc_norm": 0.2777777777777778, "acc_norm_stderr": 0.027309140588230203 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6512605042016807, "acc_stderr": 0.030956636328566545, "acc_norm": 0.6512605042016807, "acc_norm_stderr": 0.030956636328566545 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.32450331125827814, "acc_stderr": 0.038227469376587525, "acc_norm": 0.32450331125827814, "acc_norm_stderr": 0.038227469376587525 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8440366972477065, "acc_stderr": 0.015555802713590158, "acc_norm": 0.8440366972477065, "acc_norm_stderr": 0.015555802713590158 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4722222222222222, "acc_stderr": 0.0340470532865388, "acc_norm": 0.4722222222222222, "acc_norm_stderr": 0.0340470532865388 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8431372549019608, "acc_stderr": 0.025524722324553346, "acc_norm": 0.8431372549019608, "acc_norm_stderr": 0.025524722324553346 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.810126582278481, "acc_stderr": 0.025530100460233497, "acc_norm": 0.810126582278481, "acc_norm_stderr": 0.025530100460233497 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7174887892376681, "acc_stderr": 0.03021683101150877, "acc_norm": 0.7174887892376681, "acc_norm_stderr": 0.03021683101150877 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7786259541984732, "acc_stderr": 0.0364129708131373, "acc_norm": 0.7786259541984732, "acc_norm_stderr": 0.0364129708131373 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8148148148148148, "acc_stderr": 0.03755265865037181, "acc_norm": 0.8148148148148148, "acc_norm_stderr": 0.03755265865037181 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7914110429447853, "acc_stderr": 0.031921934489347235, "acc_norm": 0.7914110429447853, "acc_norm_stderr": 0.031921934489347235 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5446428571428571, "acc_stderr": 0.04726835553719097, "acc_norm": 0.5446428571428571, "acc_norm_stderr": 0.04726835553719097 }, "harness|hendrycksTest-management|5": { "acc": 0.8349514563106796, "acc_stderr": 0.036756688322331886, "acc_norm": 0.8349514563106796, "acc_norm_stderr": 0.036756688322331886 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8290598290598291, "acc_stderr": 0.024662496845209804, "acc_norm": 0.8290598290598291, "acc_norm_stderr": 0.024662496845209804 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8250319284802043, "acc_stderr": 0.013586619219903324, "acc_norm": 0.8250319284802043, "acc_norm_stderr": 0.013586619219903324 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7283236994219653, "acc_stderr": 0.023948512905468348, "acc_norm": 0.7283236994219653, "acc_norm_stderr": 0.023948512905468348 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.36312849162011174, "acc_stderr": 0.016083749986853704, "acc_norm": 0.36312849162011174, "acc_norm_stderr": 0.016083749986853704 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7450980392156863, "acc_stderr": 0.02495418432487991, "acc_norm": 0.7450980392156863, "acc_norm_stderr": 0.02495418432487991 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7202572347266881, "acc_stderr": 0.02549425935069491, "acc_norm": 0.7202572347266881, "acc_norm_stderr": 0.02549425935069491 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7530864197530864, "acc_stderr": 0.023993501709042114, "acc_norm": 0.7530864197530864, "acc_norm_stderr": 0.023993501709042114 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4787234042553192, "acc_stderr": 0.029800481645628693, "acc_norm": 0.4787234042553192, "acc_norm_stderr": 0.029800481645628693 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4367666232073012, "acc_stderr": 0.01266770191960366, "acc_norm": 0.4367666232073012, "acc_norm_stderr": 0.01266770191960366 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6176470588235294, "acc_stderr": 0.029520095697687765, "acc_norm": 0.6176470588235294, "acc_norm_stderr": 0.029520095697687765 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6699346405228758, "acc_stderr": 0.019023726160724553, "acc_norm": 0.6699346405228758, "acc_norm_stderr": 0.019023726160724553 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 0.04554619617541054 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.726530612244898, "acc_stderr": 0.028535560337128445, "acc_norm": 0.726530612244898, "acc_norm_stderr": 0.028535560337128445 }, "harness|hendrycksTest-sociology|5": { "acc": 0.845771144278607, "acc_stderr": 0.025538433368578334, "acc_norm": 0.845771144278607, "acc_norm_stderr": 0.025538433368578334 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.85, "acc_stderr": 0.03588702812826371, "acc_norm": 0.85, "acc_norm_stderr": 0.03588702812826371 }, "harness|hendrycksTest-virology|5": { "acc": 0.5542168674698795, "acc_stderr": 0.03869543323472101, "acc_norm": 0.5542168674698795, "acc_norm_stderr": 0.03869543323472101 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.25091799265605874, "mc1_stderr": 0.015176985027707694, "mc2": 0.5006656333594469, "mc2_stderr": 0.01636490303268174 }, "harness|winogrande|5": { "acc": 0.7261247040252565, "acc_stderr": 0.012533292732620296 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 } } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-P-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |46.02| |AI2 Reasoning Challenge (25-Shot)|38.57| |HellaSwag (10-Shot) |51.54| |MMLU (5-Shot) |63.36| |TruthfulQA (0-shot) |50.07| |Winogrande (5-shot) |72.61| |GSM8k (5-shot) | 0.00|
MaziyarPanahi/WizardLM-Math-70B-v0.1
MaziyarPanahi
2024-03-04T13:34:47Z
110
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "wizardlm", "mique", "license:agpl-3.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-14T00:03:45Z
--- license: agpl-3.0 tags: - merge - wizardlm - mique pipeline_tag: text-generation model-index: - name: WizardLM-Math-70B-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.06 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.01 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 69.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 57.07 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.77 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 64.44 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1 name: Open LLM Leaderboard --- ## Examples There are three killers in a room. Someone enters the room and kills one of them. Nobody leaves the room. How many killers are left in the room? Explain your reasoning step by step. ``` Step 1: Identify the initial situation There are three killers in a room, which we can label as A, B, and C. Step 2: Describe the event Someone (let's call this person D) enters the room and kills one of the killers. For the sake of simplicity, let's say that Killer A is killed. Step 3: Analyze the new situation After the event, there are now two killers left in the room: B and C. The person who entered the room, D, is also still present. However, since D did not leave the room after killing A, it means that D is also a killer. Therefore, there are actually three killers in the room: B, C, and D. Final Answer: After the event, there are three killers left in the room: B, C, and D. ``` Mark is faster than Mary , Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step ``` Step 1: Understand the given information We know that Mark is faster than Mary, and Mary is faster than Joe. This means that Mark is the fastest among the three, followed by Mary, and then Joe. Step 2: Analyze the question The question asks if Joe is faster than Mark. Based on the given information, we can conclude that Joe is not faster than Mark. In fact, Mark is faster than Joe. Final Answer: No, Joe is not faster than Mark. ``` If I lay 2 wet shirts out in the sun flat on the ground to dry and it takes 5 hours until they are dry, how long would it take to dry 4 shirts of the same type that way simultanously? ``` The time taken for a shirt to dry depends on several factors such as the material of the shirt, the humidity, temperature, wind speed, etc. However, if we assume that all four shirts have similar properties and the weather conditions remain constant, then you can expect the drying time to be proportional to the number of shirts. In other words, if two shirts take 5 hours to dry, then four shirts will likely take twice as long, or 10 hours. This is because there is more fabric to dry, which means it will absorb more water from the air, thus taking longer to reach equilibrium with the surrounding environment. However, this is just an estimate, and the actual drying time may vary depending on the specific circumstances. ``` Wrong, it takes the same amount of time! explain step by step 25-4*2+3=? ``` Step 1: Perform the multiplication operation first. 25 - 4 * 2 + 3 = 25 - 8 + 3 Step 2: Now, perform the subtraction and addition operations from left to right. 25 - 8 + 3 = 17 + 3 Step 3: Finally, perform the remaining addition operation. 17 + 3 = 20 So, 25-4*2+3 equals 20. ``` ## Eval ### WizardLM/WizardLM-70B-V1.0 vs. MaziyarPanahi/WizardLM-Math-70B-v0.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/7NWjisnm1gfZN2cMg4c5X.png) ### Leaderboard ```python { "all": { "acc": 0.6914116069568377, "acc_stderr": 0.03063431437342948, "acc_norm": 0.6938613221179539, "acc_norm_stderr": 0.031238741076549784, "mc1": 0.40269277845777235, "mc1_stderr": 0.01716883093518722, "mc2": 0.5707095526544473, "mc2_stderr": 0.01525040450448649 }, "harness|arc:challenge|25": { "acc": 0.6322525597269625, "acc_stderr": 0.014090995618168482, "acc_norm": 0.6706484641638225, "acc_norm_stderr": 0.013734057652635474 }, "harness|hellaswag|10": { "acc": 0.6746664011153157, "acc_stderr": 0.0046754187743142306, "acc_norm": 0.8600876319458275, "acc_norm_stderr": 0.0034618713240671846 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6518518518518519, "acc_stderr": 0.041153246103369526, "acc_norm": 0.6518518518518519, "acc_norm_stderr": 0.041153246103369526 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7894736842105263, "acc_stderr": 0.03317672787533157, "acc_norm": 0.7894736842105263, "acc_norm_stderr": 0.03317672787533157 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.73, "acc_stderr": 0.04461960433384741, "acc_norm": 0.73, "acc_norm_stderr": 0.04461960433384741 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7283018867924528, "acc_stderr": 0.027377706624670713, "acc_norm": 0.7283018867924528, "acc_norm_stderr": 0.027377706624670713 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8194444444444444, "acc_stderr": 0.032166008088022675, "acc_norm": 0.8194444444444444, "acc_norm_stderr": 0.032166008088022675 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.57, "acc_stderr": 0.049756985195624284, "acc_norm": 0.57, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6878612716763006, "acc_stderr": 0.035331333893236574, "acc_norm": 0.6878612716763006, "acc_norm_stderr": 0.035331333893236574 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.35294117647058826, "acc_stderr": 0.047551296160629475, "acc_norm": 0.35294117647058826, "acc_norm_stderr": 0.047551296160629475 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.676595744680851, "acc_stderr": 0.030579442773610337, "acc_norm": 0.676595744680851, "acc_norm_stderr": 0.030579442773610337 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.40350877192982454, "acc_stderr": 0.046151869625837026, "acc_norm": 0.40350877192982454, "acc_norm_stderr": 0.046151869625837026 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5793103448275863, "acc_stderr": 0.04113914981189261, "acc_norm": 0.5793103448275863, "acc_norm_stderr": 0.04113914981189261 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4497354497354497, "acc_stderr": 0.02562085704293665, "acc_norm": 0.4497354497354497, "acc_norm_stderr": 0.02562085704293665 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.46825396825396826, "acc_stderr": 0.04463112720677172, "acc_norm": 0.46825396825396826, "acc_norm_stderr": 0.04463112720677172 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.46, "acc_stderr": 0.05009082659620332, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8129032258064516, "acc_stderr": 0.022185710092252252, "acc_norm": 0.8129032258064516, "acc_norm_stderr": 0.022185710092252252 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5369458128078818, "acc_stderr": 0.035083705204426656, "acc_norm": 0.5369458128078818, "acc_norm_stderr": 0.035083705204426656 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.79, "acc_stderr": 0.040936018074033256, "acc_norm": 0.79, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8363636363636363, "acc_stderr": 0.02888787239548795, "acc_norm": 0.8363636363636363, "acc_norm_stderr": 0.02888787239548795 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8686868686868687, "acc_stderr": 0.024063156416822502, "acc_norm": 0.8686868686868687, "acc_norm_stderr": 0.024063156416822502 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.927461139896373, "acc_stderr": 0.018718998520678178, "acc_norm": 0.927461139896373, "acc_norm_stderr": 0.018718998520678178 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.7025641025641025, "acc_stderr": 0.023177408131465953, "acc_norm": 0.7025641025641025, "acc_norm_stderr": 0.023177408131465953 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34814814814814815, "acc_stderr": 0.02904560029061626, "acc_norm": 0.34814814814814815, "acc_norm_stderr": 0.02904560029061626 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7941176470588235, "acc_stderr": 0.02626502460827588, "acc_norm": 0.7941176470588235, "acc_norm_stderr": 0.02626502460827588 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.4503311258278146, "acc_stderr": 0.04062290018683776, "acc_norm": 0.4503311258278146, "acc_norm_stderr": 0.04062290018683776 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8954128440366973, "acc_stderr": 0.013120530245265593, "acc_norm": 0.8954128440366973, "acc_norm_stderr": 0.013120530245265593 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5787037037037037, "acc_stderr": 0.03367462138896078, "acc_norm": 0.5787037037037037, "acc_norm_stderr": 0.03367462138896078 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.9166666666666666, "acc_stderr": 0.019398452135813905, "acc_norm": 0.9166666666666666, "acc_norm_stderr": 0.019398452135813905 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8860759493670886, "acc_stderr": 0.020681745135884565, "acc_norm": 0.8860759493670886, "acc_norm_stderr": 0.020681745135884565 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.757847533632287, "acc_stderr": 0.028751392398694755, "acc_norm": 0.757847533632287, "acc_norm_stderr": 0.028751392398694755 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8702290076335878, "acc_stderr": 0.029473649496907065, "acc_norm": 0.8702290076335878, "acc_norm_stderr": 0.029473649496907065 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8181818181818182, "acc_stderr": 0.03520893951097655, "acc_norm": 0.8181818181818182, "acc_norm_stderr": 0.03520893951097655 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8148148148148148, "acc_stderr": 0.03755265865037181, "acc_norm": 0.8148148148148148, "acc_norm_stderr": 0.03755265865037181 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7791411042944786, "acc_stderr": 0.03259177392742179, "acc_norm": 0.7791411042944786, "acc_norm_stderr": 0.03259177392742179 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.8446601941747572, "acc_stderr": 0.03586594738573974, "acc_norm": 0.8446601941747572, "acc_norm_stderr": 0.03586594738573974 }, "harness|hendrycksTest-marketing|5": { "acc": 0.905982905982906, "acc_stderr": 0.019119892798924974, "acc_norm": 0.905982905982906, "acc_norm_stderr": 0.019119892798924974 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.67, "acc_stderr": 0.047258156262526066, "acc_norm": 0.67, "acc_norm_stderr": 0.047258156262526066 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8697318007662835, "acc_stderr": 0.012036729568216054, "acc_norm": 0.8697318007662835, "acc_norm_stderr": 0.012036729568216054 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7774566473988439, "acc_stderr": 0.02239421566194282, "acc_norm": 0.7774566473988439, "acc_norm_stderr": 0.02239421566194282 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.5553072625698324, "acc_stderr": 0.016619881988177012, "acc_norm": 0.5553072625698324, "acc_norm_stderr": 0.016619881988177012 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7516339869281046, "acc_stderr": 0.024739981355113592, "acc_norm": 0.7516339869281046, "acc_norm_stderr": 0.024739981355113592 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.77491961414791, "acc_stderr": 0.023720088516179027, "acc_norm": 0.77491961414791, "acc_norm_stderr": 0.023720088516179027 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7962962962962963, "acc_stderr": 0.02240967454730417, "acc_norm": 0.7962962962962963, "acc_norm_stderr": 0.02240967454730417 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5390070921985816, "acc_stderr": 0.029736592526424445, "acc_norm": 0.5390070921985816, "acc_norm_stderr": 0.029736592526424445 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5586701434159062, "acc_stderr": 0.012682016335646683, "acc_norm": 0.5586701434159062, "acc_norm_stderr": 0.012682016335646683 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7242647058823529, "acc_stderr": 0.027146271936625162, "acc_norm": 0.7242647058823529, "acc_norm_stderr": 0.027146271936625162 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.761437908496732, "acc_stderr": 0.017242385828779627, "acc_norm": 0.761437908496732, "acc_norm_stderr": 0.017242385828779627 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7454545454545455, "acc_stderr": 0.041723430387053825, "acc_norm": 0.7454545454545455, "acc_norm_stderr": 0.041723430387053825 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7877551020408163, "acc_stderr": 0.026176967197866767, "acc_norm": 0.7877551020408163, "acc_norm_stderr": 0.026176967197866767 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8805970149253731, "acc_stderr": 0.02292879327721974, "acc_norm": 0.8805970149253731, "acc_norm_stderr": 0.02292879327721974 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.9, "acc_stderr": 0.030151134457776334, "acc_norm": 0.9, "acc_norm_stderr": 0.030151134457776334 }, "harness|hendrycksTest-virology|5": { "acc": 0.5602409638554217, "acc_stderr": 0.03864139923699122, "acc_norm": 0.5602409638554217, "acc_norm_stderr": 0.03864139923699122 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8596491228070176, "acc_stderr": 0.0266405825391332, "acc_norm": 0.8596491228070176, "acc_norm_stderr": 0.0266405825391332 }, "harness|truthfulqa:mc|0": { "mc1": 0.40269277845777235, "mc1_stderr": 0.01716883093518722, "mc2": 0.5707095526544473, "mc2_stderr": 0.01525040450448649 }, "harness|winogrande|5": { "acc": 0.8176795580110497, "acc_stderr": 0.010851565594267207 }, "harness|gsm8k|5": { "acc": 0.6444275966641395, "acc_stderr": 0.013185402252713852 } } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__WizardLM-Math-70B-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |70.92| |AI2 Reasoning Challenge (25-Shot)|67.06| |HellaSwag (10-Shot) |86.01| |MMLU (5-Shot) |69.14| |TruthfulQA (0-shot) |57.07| |Winogrande (5-shot) |81.77| |GSM8k (5-shot) |64.44|
MaziyarPanahi/TheTop-5x7B-Instruct-S4-v0.1
MaziyarPanahi
2024-03-04T13:34:20Z
68
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-12T21:58:43Z
--- license: apache-2.0 library_name: transformers tags: - merge pipeline_tag: text-generation model-index: - name: TheTop-5x7B-Instruct-S4-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.18 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S4-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S4-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.03 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S4-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 65.56 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S4-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S4-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 73.39 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S4-v0.1 name: Open LLM Leaderboard --- Merge of top 7B models and the SLERP of other 7B models > mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention. > > ## Eval ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/3a2An3rpaLMusQrtQ74Up.png) ```python { "all": { "acc": 0.6568351479800627, "acc_stderr": 0.03199600851869088, "acc_norm": 0.6554901222242155, "acc_norm_stderr": 0.03267670432184765, "mc1": 0.5104039167686658, "mc1_stderr": 0.017499711430249268, "mc2": 0.6556430108444109, "mc2_stderr": 0.015519025079862213 }, "harness|arc:challenge|25": { "acc": 0.6919795221843004, "acc_stderr": 0.013491429517292038, "acc_norm": 0.7218430034129693, "acc_norm_stderr": 0.013094469919538812 }, "harness|hellaswag|10": { "acc": 0.7202748456482773, "acc_stderr": 0.0044794676194648, "acc_norm": 0.8828918542123083, "acc_norm_stderr": 0.003208919510309931 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.047258156262526045, "acc_norm": 0.33, "acc_norm_stderr": 0.047258156262526045 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6518518518518519, "acc_stderr": 0.041153246103369526, "acc_norm": 0.6518518518518519, "acc_norm_stderr": 0.041153246103369526 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7039473684210527, "acc_stderr": 0.03715062154998904, "acc_norm": 0.7039473684210527, "acc_norm_stderr": 0.03715062154998904 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.66, "acc_stderr": 0.04760952285695238, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695238 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6981132075471698, "acc_stderr": 0.02825420034443866, "acc_norm": 0.6981132075471698, "acc_norm_stderr": 0.02825420034443866 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7708333333333334, "acc_stderr": 0.03514697467862388, "acc_norm": 0.7708333333333334, "acc_norm_stderr": 0.03514697467862388 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.27, "acc_stderr": 0.044619604333847394, "acc_norm": 0.27, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4019607843137255, "acc_stderr": 0.04878608714466996, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.04878608714466996 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5914893617021276, "acc_stderr": 0.032134180267015755, "acc_norm": 0.5914893617021276, "acc_norm_stderr": 0.032134180267015755 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5087719298245614, "acc_stderr": 0.04702880432049615, "acc_norm": 0.5087719298245614, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5724137931034483, "acc_stderr": 0.04122737111370332, "acc_norm": 0.5724137931034483, "acc_norm_stderr": 0.04122737111370332 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.42592592592592593, "acc_stderr": 0.02546714904546955, "acc_norm": 0.42592592592592593, "acc_norm_stderr": 0.02546714904546955 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.49206349206349204, "acc_stderr": 0.044715725362943486, "acc_norm": 0.49206349206349204, "acc_norm_stderr": 0.044715725362943486 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7903225806451613, "acc_stderr": 0.023157879349083525, "acc_norm": 0.7903225806451613, "acc_norm_stderr": 0.023157879349083525 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5073891625615764, "acc_stderr": 0.035176035403610105, "acc_norm": 0.5073891625615764, "acc_norm_stderr": 0.035176035403610105 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7757575757575758, "acc_stderr": 0.03256866661681102, "acc_norm": 0.7757575757575758, "acc_norm_stderr": 0.03256866661681102 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7929292929292929, "acc_stderr": 0.028869778460267045, "acc_norm": 0.7929292929292929, "acc_norm_stderr": 0.028869778460267045 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9067357512953368, "acc_stderr": 0.020986854593289733, "acc_norm": 0.9067357512953368, "acc_norm_stderr": 0.020986854593289733 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6666666666666666, "acc_stderr": 0.023901157979402534, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.023901157979402534 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34814814814814815, "acc_stderr": 0.02904560029061625, "acc_norm": 0.34814814814814815, "acc_norm_stderr": 0.02904560029061625 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6764705882352942, "acc_stderr": 0.030388353551886793, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.030388353551886793 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.36423841059602646, "acc_stderr": 0.03929111781242742, "acc_norm": 0.36423841059602646, "acc_norm_stderr": 0.03929111781242742 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8366972477064221, "acc_stderr": 0.01584825580650155, "acc_norm": 0.8366972477064221, "acc_norm_stderr": 0.01584825580650155 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5046296296296297, "acc_stderr": 0.03409825519163572, "acc_norm": 0.5046296296296297, "acc_norm_stderr": 0.03409825519163572 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8529411764705882, "acc_stderr": 0.024857478080250447, "acc_norm": 0.8529411764705882, "acc_norm_stderr": 0.024857478080250447 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8143459915611815, "acc_stderr": 0.025310495376944856, "acc_norm": 0.8143459915611815, "acc_norm_stderr": 0.025310495376944856 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6816143497757847, "acc_stderr": 0.03126580522513713, "acc_norm": 0.6816143497757847, "acc_norm_stderr": 0.03126580522513713 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7862595419847328, "acc_stderr": 0.0359546161177469, "acc_norm": 0.7862595419847328, "acc_norm_stderr": 0.0359546161177469 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098824, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098824 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7870370370370371, "acc_stderr": 0.0395783547198098, "acc_norm": 0.7870370370370371, "acc_norm_stderr": 0.0395783547198098 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7730061349693251, "acc_stderr": 0.03291099578615769, "acc_norm": 0.7730061349693251, "acc_norm_stderr": 0.03291099578615769 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.040580420156460344, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.040580420156460344 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8803418803418803, "acc_stderr": 0.021262719400406974, "acc_norm": 0.8803418803418803, "acc_norm_stderr": 0.021262719400406974 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.73, "acc_stderr": 0.0446196043338474, "acc_norm": 0.73, "acc_norm_stderr": 0.0446196043338474 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8275862068965517, "acc_stderr": 0.013507943909371802, "acc_norm": 0.8275862068965517, "acc_norm_stderr": 0.013507943909371802 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7543352601156069, "acc_stderr": 0.023176298203992005, "acc_norm": 0.7543352601156069, "acc_norm_stderr": 0.023176298203992005 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.45027932960893857, "acc_stderr": 0.01663961523684581, "acc_norm": 0.45027932960893857, "acc_norm_stderr": 0.01663961523684581 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7254901960784313, "acc_stderr": 0.02555316999182652, "acc_norm": 0.7254901960784313, "acc_norm_stderr": 0.02555316999182652 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7138263665594855, "acc_stderr": 0.025670259242188933, "acc_norm": 0.7138263665594855, "acc_norm_stderr": 0.025670259242188933 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7561728395061729, "acc_stderr": 0.02389187954195961, "acc_norm": 0.7561728395061729, "acc_norm_stderr": 0.02389187954195961 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.46808510638297873, "acc_stderr": 0.029766675075873866, "acc_norm": 0.46808510638297873, "acc_norm_stderr": 0.029766675075873866 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4745762711864407, "acc_stderr": 0.012753716929101004, "acc_norm": 0.4745762711864407, "acc_norm_stderr": 0.012753716929101004 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6911764705882353, "acc_stderr": 0.02806499816704009, "acc_norm": 0.6911764705882353, "acc_norm_stderr": 0.02806499816704009 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6748366013071896, "acc_stderr": 0.01895088677080631, "acc_norm": 0.6748366013071896, "acc_norm_stderr": 0.01895088677080631 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 0.04554619617541054 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7346938775510204, "acc_stderr": 0.028263889943784603, "acc_norm": 0.7346938775510204, "acc_norm_stderr": 0.028263889943784603 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8258706467661692, "acc_stderr": 0.026814951200421603, "acc_norm": 0.8258706467661692, "acc_norm_stderr": 0.026814951200421603 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.85, "acc_stderr": 0.03588702812826371, "acc_norm": 0.85, "acc_norm_stderr": 0.03588702812826371 }, "harness|hendrycksTest-virology|5": { "acc": 0.5602409638554217, "acc_stderr": 0.03864139923699122, "acc_norm": 0.5602409638554217, "acc_norm_stderr": 0.03864139923699122 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8421052631578947, "acc_stderr": 0.027966785859160893, "acc_norm": 0.8421052631578947, "acc_norm_stderr": 0.027966785859160893 }, "harness|truthfulqa:mc|0": { "mc1": 0.5104039167686658, "mc1_stderr": 0.017499711430249268, "mc2": 0.6556430108444109, "mc2_stderr": 0.015519025079862213 }, "harness|winogrande|5": { "acc": 0.8516179952644041, "acc_stderr": 0.009990706005184136 }, "harness|gsm8k|5": { "acc": 0.7338893100833965, "acc_stderr": 0.012172750939040328 } } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S4-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |74.94| |AI2 Reasoning Challenge (25-Shot)|72.18| |HellaSwag (10-Shot) |88.29| |MMLU (5-Shot) |65.03| |TruthfulQA (0-shot) |65.56| |Winogrande (5-shot) |85.16| |GSM8k (5-shot) |73.39|
bigcode/starcoder2-3b
bigcode
2024-03-04T13:33:12Z
708,535
161
transformers
[ "transformers", "safetensors", "starcoder2", "text-generation", "code", "dataset:bigcode/the-stack-v2-train", "arxiv:2305.13245", "arxiv:2205.14135", "arxiv:2004.05150", "arxiv:2207.14255", "arxiv:2402.19173", "license:bigcode-openrail-m", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-29T15:22:51Z
--- pipeline_tag: text-generation inference: true widget: - text: 'def print_hello_world():' example_title: Hello world group: Python datasets: - bigcode/the-stack-v2-train license: bigcode-openrail-m library_name: transformers tags: - code model-index: - name: starcoder2-3b results: - task: type: text-generation dataset: name: CruxEval-I type: cruxeval-i metrics: - type: pass@1 value: 32.7 - task: type: text-generation dataset: name: DS-1000 type: ds-1000 metrics: - type: pass@1 value: 25.0 - task: type: text-generation dataset: name: GSM8K (PAL) type: gsm8k-pal metrics: - type: accuracy value: 27.7 - task: type: text-generation dataset: name: HumanEval+ type: humanevalplus metrics: - type: pass@1 value: 27.4 - task: type: text-generation dataset: name: HumanEval type: humaneval metrics: - type: pass@1 value: 31.7 - task: type: text-generation dataset: name: RepoBench-v1.1 type: repobench-v1.1 metrics: - type: edit-smiliarity value: 71.19 --- # StarCoder2 <center> <img src="https://huggingface.co/datasets/bigcode/admin_private/resolve/main/starcoder2_banner.png" alt="SC2" width="900" height="600"> </center> ## Table of Contents 1. [Model Summary](##model-summary) 2. [Use](##use) 3. [Limitations](##limitations) 4. [Training](##training) 5. [License](##license) 6. [Citation](##citation) ## Model Summary StarCoder2-3B model is a 3B parameter model trained on 17 programming languages from [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train), with opt-out requests excluded. The model uses [Grouped Query Attention](https://arxiv.org/abs/2305.13245), [a context window of 16,384 tokens](https://arxiv.org/abs/2205.14135) with [a sliding window attention of 4,096 tokens](https://arxiv.org/abs/2004.05150v2), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 3+ trillion tokens. - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org) - **Paper:** [Link](https://huggingface.co/papers/2402.19173) - **Point of Contact:** [[email protected]](mailto:[email protected]) - **Languages:** 17 Programming languages ## Use ### Intended use The model was trained on GitHub code as well as additional selected data sources such as Arxiv and Wikipedia. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. ### Generation Here are some examples to get started with the model. You can find a script for fine-tuning in StarCoder2's [GitHub repository](https://github.com/bigcode-project/starcoder2). First, make sure to install `transformers` from source: ```bash pip install git+https://github.com/huggingface/transformers.git ``` #### Running the model on CPU/GPU/multi GPU * _Using full precision_ ```python # pip install git+https://github.com/huggingface/transformers.git # TODO: merge PR to main from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigcode/starcoder2-3b" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) # for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")` model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ```bash >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB") Memory footprint: 12624.81 MB ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate import torch from transformers import AutoTokenizer, AutoModelForCausalLM checkpoint = "bigcode/starcoder2-3b" tokenizer = AutoTokenizer.from_pretrained(checkpoint) # for fp16 use `torch_dtype=torch.float16` instead model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ```bash >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB") Memory footprint: 6312.41 MB ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig # to use 4bit use `load_in_4bit=True` instead quantization_config = BitsAndBytesConfig(load_in_8bit=True) checkpoint = "bigcode/starcoder2-3b" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ```bash >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB") # load_in_8bit Memory footprint: 3434.07 MB # load_in_4bit >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB") Memory footprint: 1994.90 MB ``` ### Attribution & Other Requirements The pretraining dataset of the model was filtered for permissive licenses and code with no license only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/search-v2) that lets you search through the pretraining data to identify where the generated code came from, and apply the proper attribution to your code. # Limitations The model has been trained on source code from 600+ programming languages. The predominant language in source is English although other languages are also present. As such the model is capable to generate code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://huggingface.co/papers/2402.19173) for an in-depth discussion of the model limitations. # Training ## Model - **Architecture:** Transformer decoder with grouped-query and sliding window attention and Fill-in-the-Middle objective - **Pretraining steps:** 1.2 million - **Pretraining tokens:** 3+ trillion - **Precision:** bfloat16 ## Hardware - **GPUs:** 160 A100 ## Software - **Framework:** TODO - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) # License The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement). # Citation ```bash @misc{lozhkov2024starcoder, title={StarCoder 2 and The Stack v2: The Next Generation}, author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries}, year={2024}, eprint={2402.19173}, archivePrefix={arXiv}, primaryClass={cs.SE} } ```
abrsh2121/DialoGPT-small-joshua
abrsh2121
2024-03-04T13:32:11Z
3
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T13:31:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hiyouga/Qwen-14B-Chat-LLaMAfied
hiyouga
2024-03-04T13:29:19Z
1,550
8
transformers
[ "transformers", "safetensors", "llama", "text-generation", "qwen", "conversational", "en", "zh", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-25T11:29:27Z
--- language: - en - zh license: other library_name: transformers tags: - llama - qwen license_name: qwen license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT pipeline_tag: text-generation inference: false model-index: - name: Qwen-14B-Chat-LLaMAfied results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 57.51 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hiyouga/Qwen-14B-Chat-LLaMAfied name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.11 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hiyouga/Qwen-14B-Chat-LLaMAfied name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.57 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hiyouga/Qwen-14B-Chat-LLaMAfied name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.99 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hiyouga/Qwen-14B-Chat-LLaMAfied name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 72.93 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hiyouga/Qwen-14B-Chat-LLaMAfied name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 39.5 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hiyouga/Qwen-14B-Chat-LLaMAfied name: Open LLM Leaderboard --- This is the LLaMAfied version of [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) model by Alibaba Cloud. This model is converted with https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py The tokenizer is borrowed from https://huggingface.co/CausalLM/72B-preview-llamafied-qwen-llamafy You may use this model for fine-tuning in downstream tasks, we recommend using our efficient fine-tuning toolkit. https://github.com/hiyouga/LLaMA-Factory - **Developed by:** Alibaba Cloud. - **Language(s) (NLP):** Chinese/English - **License:** [Tongyi Qianwen License](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) Usage: ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer tokenizer = AutoTokenizer.from_pretrained("hiyouga/Qwen-14B-Chat-LLaMAfied") model = AutoModelForCausalLM.from_pretrained("hiyouga/Qwen-14B-Chat-LLaMAfied", torch_dtype="auto", device_map="auto") streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) messages = [ {"role": "user", "content": "Who are you?"} ] inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") inputs = inputs.to("cuda") generate_ids = model.generate(inputs, streamer=streamer) ``` You could also alternatively launch a CLI demo by using the script in [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) ```bash python src/cli_demo.py --template qwen --model_name_or_path hiyouga/Qwen-14B-Chat-LLaMAfied ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_hiyouga__Qwen-14B-Chat-LLaMAfied) | Metric |Value| |---------------------------------|----:| |Avg. |61.60| |AI2 Reasoning Challenge (25-Shot)|57.51| |HellaSwag (10-Shot) |82.11| |MMLU (5-Shot) |65.57| |TruthfulQA (0-shot) |51.99| |Winogrande (5-shot) |72.93| |GSM8k (5-shot) |39.50|
ninja21/q-Taxi-v3
ninja21
2024-03-04T13:22:35Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-04T13:22:33Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.69 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage model = load_from_hub(repo_id="ninja21/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"])
ImperialIndians23/RobertaBaseUnprocessedAugmentedBoth
ImperialIndians23
2024-03-04T13:20:24Z
6
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-04T13:06:29Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: RobertaBaseUnprocessedAugmentedBoth results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RobertaBaseUnprocessedAugmentedBoth This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3811 - Accuracy: 0.9260 - F1: 0.5634 - Precision: 0.6410 - Recall: 0.5025 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.1852 | 1.0 | 1344 | 0.3281 | 0.9202 | 0.4630 | 0.6429 | 0.3618 | | 0.1599 | 2.0 | 2689 | 0.3138 | 0.9226 | 0.5120 | 0.6391 | 0.4271 | | 0.0474 | 3.0 | 4032 | 0.3811 | 0.9260 | 0.5634 | 0.6410 | 0.5025 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
Weni/ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0
Weni
2024-03-04T13:19:40Z
0
0
peft
[ "peft", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-03-04T11:35:09Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZeroShot-3.3.21-Mistral-7b-Multilanguage-3.2.0 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5854 | 0.13 | 100 | 0.5553 | | 0.5226 | 0.27 | 200 | 0.5098 | | 0.5001 | 0.4 | 300 | 0.4974 | | 0.4918 | 0.53 | 400 | 0.4868 | | 0.4844 | 0.67 | 500 | 0.4795 | | 0.4759 | 0.8 | 600 | 0.4751 | | 0.4773 | 0.94 | 700 | 0.4733 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Unbabel/TowerBase-13B-v0.1
Unbabel
2024-03-04T13:18:13Z
276
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "translation", "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es", "arxiv:2402.17733", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2024-01-26T17:43:54Z
--- license: cc-by-nc-4.0 language: - en - de - fr - zh - pt - nl - ru - ko - it - es metrics: - comet pipeline_tag: translation --- # Model Card for TowerBase-13B-v0.1 ## Model Details ### Model Description TowerBase-13B is a language model that results from continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten different languages — English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian — and bilingual data. TowerBase-13B-v0.1 is the first model in the series. The resulting model shows improved performance on the supported languages, while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1). We will release more details in the upcoming technical report. - **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay - **Model type:** A 13B parameter model built on top of Llama 2 by continuing pretraining on multilingual data. - **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian - **License:** CC-BY-NC-4.0, Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. ## Intended uses & limitations The model is intended for research purposes in the 10 languages it supports. The model is able to perform well on translation and related tasks (e.g., APE, GEC) on a few-shot regime. It can also be fine-tuned to perform these tasks in a zero-shot fashion (see [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1), as well as other multilingual tasks. ### Out-of-Scope Use The model is not guaranteed to perform well for languages other than the 10 languages it supports. ## Bias, Risks, and Limitations TowerBase-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements). ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "Unbabel/TowerBase-13B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "English: My name is TowerBase.\nPortuguese:" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### Training Data Filtered versions of [mc4](https://huggingface.co/datasets/mc4) and bilingual data from various sources (e.g., [OPUS](https://opus.nlpl.eu/)). ## Citation ```bibtex @misc{tower_llm_2024, title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks}, author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins}, year={2024}, eprint={2402.17733}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Vivendo/vixtral-7b-beta
Vivendo
2024-03-04T13:11:15Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T10:17:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ronibandini/reggaetonbegone
ronibandini
2024-03-04T13:10:39Z
0
0
null
[ "music", "reggaeton", "Edge Impulse", "license:cc-by-4.0", "region:us" ]
null
2024-03-03T19:44:43Z
--- license: cc-by-4.0 tags: - music - reggaeton - Edge Impulse --- Simple Reggaeton genre classification model trained with Edge Impulse, WAV, 16KHZ, mono. Labels: reggaeton, otros Attention: only 6 songs were used for this test, so you should train a new model with more songs for increased accuracy. For otros (others) label use your ambient sounds and other music genres. Source code https://github.com/ronibandini/reggaetonBeGone Tutorial https://www.hackster.io/roni-bandini/reggaeton-be-gone-e5b6e2
nbeerbower/SuperBruphin-3x7B
nbeerbower
2024-03-04T13:08:03Z
53
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "moe", "conversational", "base_model:PetroGPT/WestSeverus-7B-DPO-v2", "base_model:finetune:PetroGPT/WestSeverus-7B-DPO-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-25T04:10:30Z
--- license: apache-2.0 tags: - mergekit - moe base_model: - nbeerbower/bruphin-epsilon - FelixChao/WestSeverus-7B-DPO-v2 - jondurbin/airoboros-m-7b-3.1.2 model-index: - name: SuperBruphin-3x7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.16 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.74 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 66.85 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B name: Open LLM Leaderboard --- # SuperBruphin-3x7B This is an experimental MoE model created using [mergekit](https://github.com/cg123/mergekit). (mixtral branch) ### Models Merged The following models were included in the merge: * [nbeerbower/bruphin-epsilon](https://huggingface.co/nbeerbower/bruphin-epsilon) * [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2) * [jondurbin/airoboros-m-7b-3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: nbeerbower/bruphin-epsilon gate_mode: hidden dtype: bfloat16 experts: - source_model: nbeerbower/bruphin-epsilon positive_prompts: - "Tell a story." - source_model: FelixChao/WestSeverus-7B-DPO-v2 positive_prompts: - "Solve this problem." - source_model: jondurbin/airoboros-m-7b-3.1.2 positive_prompts: - "Write a letter." ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__SuperBruphin-3x7B) | Metric |Value| |---------------------------------|----:| |Avg. |73.75| |AI2 Reasoning Challenge (25-Shot)|71.16| |HellaSwag (10-Shot) |87.74| |MMLU (5-Shot) |64.58| |TruthfulQA (0-shot) |66.85| |Winogrande (5-shot) |81.53| |GSM8k (5-shot) |70.66|
farid1088/Legal_GQA_BERT1000
farid1088
2024-03-04T13:04:25Z
3
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2024-03-04T10:50:03Z
--- tags: - generated_from_trainer model-index: - name: Legal_GQA_BERT1000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Legal_GQA_BERT1000 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 8.1875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 1.0 | 5 | 3.9743 | | No log | 2.0 | 10 | 3.5372 | | No log | 3.0 | 15 | 3.4254 | | No log | 4.0 | 20 | 3.5207 | | No log | 5.0 | 25 | 3.5558 | | No log | 6.0 | 30 | 3.9697 | | No log | 7.0 | 35 | 4.2918 | | No log | 8.0 | 40 | 4.6181 | | No log | 9.0 | 45 | 4.7107 | | No log | 10.0 | 50 | 4.8748 | | No log | 11.0 | 55 | 5.2327 | | No log | 12.0 | 60 | 5.6292 | | No log | 13.0 | 65 | 5.8671 | | No log | 14.0 | 70 | 5.6838 | | No log | 15.0 | 75 | 5.5434 | | No log | 16.0 | 80 | 5.9402 | | No log | 17.0 | 85 | 5.8799 | | No log | 18.0 | 90 | 6.4195 | | No log | 19.0 | 95 | 6.4886 | | No log | 20.0 | 100 | 6.3513 | | No log | 21.0 | 105 | 6.4612 | | No log | 22.0 | 110 | 6.4754 | | No log | 23.0 | 115 | 6.4456 | | No log | 24.0 | 120 | 7.0586 | | No log | 25.0 | 125 | 7.0618 | | No log | 26.0 | 130 | 6.9459 | | No log | 27.0 | 135 | 6.8421 | | No log | 28.0 | 140 | 6.5651 | | No log | 29.0 | 145 | 6.9905 | | No log | 30.0 | 150 | 6.3596 | | No log | 31.0 | 155 | 7.0807 | | No log | 32.0 | 160 | 7.1982 | | No log | 33.0 | 165 | 7.3955 | | No log | 34.0 | 170 | 7.4059 | | No log | 35.0 | 175 | 7.4678 | | No log | 36.0 | 180 | 7.6056 | | No log | 37.0 | 185 | 6.9081 | | No log | 38.0 | 190 | 7.6481 | | No log | 39.0 | 195 | 7.2091 | | No log | 40.0 | 200 | 7.0908 | | No log | 41.0 | 205 | 7.5022 | | No log | 42.0 | 210 | 6.9420 | | No log | 43.0 | 215 | 7.4599 | | No log | 44.0 | 220 | 7.3581 | | No log | 45.0 | 225 | 7.2576 | | No log | 46.0 | 230 | 7.5230 | | No log | 47.0 | 235 | 7.5638 | | No log | 48.0 | 240 | 7.5331 | | No log | 49.0 | 245 | 6.9412 | | No log | 50.0 | 250 | 7.2134 | | No log | 51.0 | 255 | 7.6715 | | No log | 52.0 | 260 | 7.3006 | | No log | 53.0 | 265 | 7.2024 | | No log | 54.0 | 270 | 7.5210 | | No log | 55.0 | 275 | 8.0935 | | No log | 56.0 | 280 | 7.9519 | | No log | 57.0 | 285 | 7.4438 | | No log | 58.0 | 290 | 7.2827 | | No log | 59.0 | 295 | 7.0894 | | No log | 60.0 | 300 | 7.6032 | | No log | 61.0 | 305 | 8.0517 | | No log | 62.0 | 310 | 7.5937 | | No log | 63.0 | 315 | 6.9395 | | No log | 64.0 | 320 | 7.0522 | | No log | 65.0 | 325 | 7.5948 | | No log | 66.0 | 330 | 7.9039 | | No log | 67.0 | 335 | 8.2209 | | No log | 68.0 | 340 | 7.7581 | | No log | 69.0 | 345 | 8.1243 | | No log | 70.0 | 350 | 8.0726 | | No log | 71.0 | 355 | 7.6872 | | No log | 72.0 | 360 | 7.0063 | | No log | 73.0 | 365 | 6.5753 | | No log | 74.0 | 370 | 7.6136 | | No log | 75.0 | 375 | 7.9254 | | No log | 76.0 | 380 | 7.7021 | | No log | 77.0 | 385 | 7.5406 | | No log | 78.0 | 390 | 7.5491 | | No log | 79.0 | 395 | 7.4915 | | No log | 80.0 | 400 | 7.5097 | | No log | 81.0 | 405 | 7.9212 | | No log | 82.0 | 410 | 7.9283 | | No log | 83.0 | 415 | 8.0748 | | No log | 84.0 | 420 | 7.7280 | | No log | 85.0 | 425 | 7.2118 | | No log | 86.0 | 430 | 6.8779 | | No log | 87.0 | 435 | 6.9110 | | No log | 88.0 | 440 | 7.0416 | | No log | 89.0 | 445 | 7.3057 | | No log | 90.0 | 450 | 7.6491 | | No log | 91.0 | 455 | 7.5459 | | No log | 92.0 | 460 | 7.3228 | | No log | 93.0 | 465 | 7.2695 | | No log | 94.0 | 470 | 7.8725 | | No log | 95.0 | 475 | 7.7156 | | No log | 96.0 | 480 | 7.5736 | | No log | 97.0 | 485 | 7.3761 | | No log | 98.0 | 490 | 7.2304 | | No log | 99.0 | 495 | 7.1071 | | 0.2269 | 100.0 | 500 | 6.9306 | | 0.2269 | 101.0 | 505 | 7.2086 | | 0.2269 | 102.0 | 510 | 7.6169 | | 0.2269 | 103.0 | 515 | 8.1745 | | 0.2269 | 104.0 | 520 | 8.0064 | | 0.2269 | 105.0 | 525 | 7.7911 | | 0.2269 | 106.0 | 530 | 7.4022 | | 0.2269 | 107.0 | 535 | 7.0566 | | 0.2269 | 108.0 | 540 | 7.2674 | | 0.2269 | 109.0 | 545 | 7.7018 | | 0.2269 | 110.0 | 550 | 7.8705 | | 0.2269 | 111.0 | 555 | 7.8524 | | 0.2269 | 112.0 | 560 | 7.3932 | | 0.2269 | 113.0 | 565 | 6.8269 | | 0.2269 | 114.0 | 570 | 7.2616 | | 0.2269 | 115.0 | 575 | 7.6497 | | 0.2269 | 116.0 | 580 | 7.7979 | | 0.2269 | 117.0 | 585 | 7.4772 | | 0.2269 | 118.0 | 590 | 7.1395 | | 0.2269 | 119.0 | 595 | 7.3239 | | 0.2269 | 120.0 | 600 | 7.2565 | | 0.2269 | 121.0 | 605 | 7.3059 | | 0.2269 | 122.0 | 610 | 7.3107 | | 0.2269 | 123.0 | 615 | 7.2712 | | 0.2269 | 124.0 | 620 | 7.7671 | | 0.2269 | 125.0 | 625 | 7.7734 | | 0.2269 | 126.0 | 630 | 7.6092 | | 0.2269 | 127.0 | 635 | 7.3898 | | 0.2269 | 128.0 | 640 | 7.4747 | | 0.2269 | 129.0 | 645 | 7.6096 | | 0.2269 | 130.0 | 650 | 7.7822 | | 0.2269 | 131.0 | 655 | 8.0582 | | 0.2269 | 132.0 | 660 | 8.0955 | | 0.2269 | 133.0 | 665 | 8.0012 | | 0.2269 | 134.0 | 670 | 7.6445 | | 0.2269 | 135.0 | 675 | 7.8296 | | 0.2269 | 136.0 | 680 | 7.9571 | | 0.2269 | 137.0 | 685 | 7.8609 | | 0.2269 | 138.0 | 690 | 7.1324 | | 0.2269 | 139.0 | 695 | 7.1330 | | 0.2269 | 140.0 | 700 | 7.4522 | | 0.2269 | 141.0 | 705 | 7.6582 | | 0.2269 | 142.0 | 710 | 7.7165 | | 0.2269 | 143.0 | 715 | 7.6934 | | 0.2269 | 144.0 | 720 | 7.6916 | | 0.2269 | 145.0 | 725 | 7.6940 | | 0.2269 | 146.0 | 730 | 7.6455 | | 0.2269 | 147.0 | 735 | 7.5148 | | 0.2269 | 148.0 | 740 | 7.1121 | | 0.2269 | 149.0 | 745 | 7.5019 | | 0.2269 | 150.0 | 750 | 7.6891 | | 0.2269 | 151.0 | 755 | 7.2418 | | 0.2269 | 152.0 | 760 | 6.8392 | | 0.2269 | 153.0 | 765 | 6.9463 | | 0.2269 | 154.0 | 770 | 7.8955 | | 0.2269 | 155.0 | 775 | 8.4044 | | 0.2269 | 156.0 | 780 | 8.3787 | | 0.2269 | 157.0 | 785 | 7.9366 | | 0.2269 | 158.0 | 790 | 7.7610 | | 0.2269 | 159.0 | 795 | 7.6725 | | 0.2269 | 160.0 | 800 | 7.6167 | | 0.2269 | 161.0 | 805 | 7.7071 | | 0.2269 | 162.0 | 810 | 7.8047 | | 0.2269 | 163.0 | 815 | 8.2038 | | 0.2269 | 164.0 | 820 | 8.4593 | | 0.2269 | 165.0 | 825 | 8.1716 | | 0.2269 | 166.0 | 830 | 7.7754 | | 0.2269 | 167.0 | 835 | 7.5142 | | 0.2269 | 168.0 | 840 | 7.3776 | | 0.2269 | 169.0 | 845 | 7.2984 | | 0.2269 | 170.0 | 850 | 7.2359 | | 0.2269 | 171.0 | 855 | 7.3082 | | 0.2269 | 172.0 | 860 | 7.5800 | | 0.2269 | 173.0 | 865 | 7.7336 | | 0.2269 | 174.0 | 870 | 7.9577 | | 0.2269 | 175.0 | 875 | 7.9004 | | 0.2269 | 176.0 | 880 | 7.7436 | | 0.2269 | 177.0 | 885 | 7.6707 | | 0.2269 | 178.0 | 890 | 7.6256 | | 0.2269 | 179.0 | 895 | 7.4739 | | 0.2269 | 180.0 | 900 | 7.2640 | | 0.2269 | 181.0 | 905 | 7.1892 | | 0.2269 | 182.0 | 910 | 7.2558 | | 0.2269 | 183.0 | 915 | 7.1922 | | 0.2269 | 184.0 | 920 | 7.2135 | | 0.2269 | 185.0 | 925 | 7.2850 | | 0.2269 | 186.0 | 930 | 7.3150 | | 0.2269 | 187.0 | 935 | 7.3337 | | 0.2269 | 188.0 | 940 | 7.5405 | | 0.2269 | 189.0 | 945 | 7.5892 | | 0.2269 | 190.0 | 950 | 7.3344 | | 0.2269 | 191.0 | 955 | 7.1228 | | 0.2269 | 192.0 | 960 | 7.2493 | | 0.2269 | 193.0 | 965 | 7.5115 | | 0.2269 | 194.0 | 970 | 7.4463 | | 0.2269 | 195.0 | 975 | 7.3382 | | 0.2269 | 196.0 | 980 | 7.2726 | | 0.2269 | 197.0 | 985 | 7.2558 | | 0.2269 | 198.0 | 990 | 7.2353 | | 0.2269 | 199.0 | 995 | 7.2022 | | 0.0008 | 200.0 | 1000 | 7.1997 | | 0.0008 | 201.0 | 1005 | 7.2157 | | 0.0008 | 202.0 | 1010 | 7.2645 | | 0.0008 | 203.0 | 1015 | 7.1960 | | 0.0008 | 204.0 | 1020 | 7.1602 | | 0.0008 | 205.0 | 1025 | 7.1455 | | 0.0008 | 206.0 | 1030 | 7.1423 | | 0.0008 | 207.0 | 1035 | 7.1524 | | 0.0008 | 208.0 | 1040 | 7.1748 | | 0.0008 | 209.0 | 1045 | 7.1960 | | 0.0008 | 210.0 | 1050 | 7.2078 | | 0.0008 | 211.0 | 1055 | 7.1741 | | 0.0008 | 212.0 | 1060 | 7.9340 | | 0.0008 | 213.0 | 1065 | 8.2643 | | 0.0008 | 214.0 | 1070 | 8.1466 | | 0.0008 | 215.0 | 1075 | 7.6816 | | 0.0008 | 216.0 | 1080 | 8.5650 | | 0.0008 | 217.0 | 1085 | 8.8665 | | 0.0008 | 218.0 | 1090 | 8.0790 | | 0.0008 | 219.0 | 1095 | 7.4438 | | 0.0008 | 220.0 | 1100 | 7.2700 | | 0.0008 | 221.0 | 1105 | 7.9172 | | 0.0008 | 222.0 | 1110 | 8.0576 | | 0.0008 | 223.0 | 1115 | 7.8573 | | 0.0008 | 224.0 | 1120 | 7.5522 | | 0.0008 | 225.0 | 1125 | 6.9782 | | 0.0008 | 226.0 | 1130 | 7.1503 | | 0.0008 | 227.0 | 1135 | 7.4730 | | 0.0008 | 228.0 | 1140 | 7.9370 | | 0.0008 | 229.0 | 1145 | 7.7869 | | 0.0008 | 230.0 | 1150 | 7.0022 | | 0.0008 | 231.0 | 1155 | 6.6475 | | 0.0008 | 232.0 | 1160 | 7.2424 | | 0.0008 | 233.0 | 1165 | 7.8082 | | 0.0008 | 234.0 | 1170 | 7.4245 | | 0.0008 | 235.0 | 1175 | 6.8436 | | 0.0008 | 236.0 | 1180 | 6.5346 | | 0.0008 | 237.0 | 1185 | 6.3053 | | 0.0008 | 238.0 | 1190 | 6.4380 | | 0.0008 | 239.0 | 1195 | 7.2193 | | 0.0008 | 240.0 | 1200 | 7.6425 | | 0.0008 | 241.0 | 1205 | 7.6422 | | 0.0008 | 242.0 | 1210 | 7.3299 | | 0.0008 | 243.0 | 1215 | 7.3236 | | 0.0008 | 244.0 | 1220 | 7.2895 | | 0.0008 | 245.0 | 1225 | 7.2075 | | 0.0008 | 246.0 | 1230 | 6.7947 | | 0.0008 | 247.0 | 1235 | 6.3677 | | 0.0008 | 248.0 | 1240 | 6.7713 | | 0.0008 | 249.0 | 1245 | 7.0393 | | 0.0008 | 250.0 | 1250 | 7.1133 | | 0.0008 | 251.0 | 1255 | 7.1206 | | 0.0008 | 252.0 | 1260 | 6.7066 | | 0.0008 | 253.0 | 1265 | 6.4519 | | 0.0008 | 254.0 | 1270 | 6.3316 | | 0.0008 | 255.0 | 1275 | 6.4723 | | 0.0008 | 256.0 | 1280 | 7.1585 | | 0.0008 | 257.0 | 1285 | 7.4964 | | 0.0008 | 258.0 | 1290 | 7.0122 | | 0.0008 | 259.0 | 1295 | 6.3236 | | 0.0008 | 260.0 | 1300 | 6.0201 | | 0.0008 | 261.0 | 1305 | 6.0107 | | 0.0008 | 262.0 | 1310 | 6.9438 | | 0.0008 | 263.0 | 1315 | 7.4045 | | 0.0008 | 264.0 | 1320 | 7.4676 | | 0.0008 | 265.0 | 1325 | 7.3376 | | 0.0008 | 266.0 | 1330 | 7.1473 | | 0.0008 | 267.0 | 1335 | 6.7904 | | 0.0008 | 268.0 | 1340 | 6.3593 | | 0.0008 | 269.0 | 1345 | 6.1926 | | 0.0008 | 270.0 | 1350 | 6.4038 | | 0.0008 | 271.0 | 1355 | 7.1407 | | 0.0008 | 272.0 | 1360 | 7.7933 | | 0.0008 | 273.0 | 1365 | 8.0508 | | 0.0008 | 274.0 | 1370 | 8.1024 | | 0.0008 | 275.0 | 1375 | 8.0394 | | 0.0008 | 276.0 | 1380 | 7.8654 | | 0.0008 | 277.0 | 1385 | 7.3339 | | 0.0008 | 278.0 | 1390 | 6.6394 | | 0.0008 | 279.0 | 1395 | 6.3018 | | 0.0008 | 280.0 | 1400 | 6.4279 | | 0.0008 | 281.0 | 1405 | 6.6617 | | 0.0008 | 282.0 | 1410 | 6.8492 | | 0.0008 | 283.0 | 1415 | 6.9763 | | 0.0008 | 284.0 | 1420 | 6.9586 | | 0.0008 | 285.0 | 1425 | 6.9465 | | 0.0008 | 286.0 | 1430 | 6.9751 | | 0.0008 | 287.0 | 1435 | 6.9967 | | 0.0008 | 288.0 | 1440 | 6.9526 | | 0.0008 | 289.0 | 1445 | 7.2242 | | 0.0008 | 290.0 | 1450 | 7.4093 | | 0.0008 | 291.0 | 1455 | 7.4869 | | 0.0008 | 292.0 | 1460 | 7.4688 | | 0.0008 | 293.0 | 1465 | 7.4282 | | 0.0008 | 294.0 | 1470 | 7.3745 | | 0.0008 | 295.0 | 1475 | 7.2768 | | 0.0008 | 296.0 | 1480 | 7.0427 | | 0.0008 | 297.0 | 1485 | 6.8857 | | 0.0008 | 298.0 | 1490 | 6.8009 | | 0.0008 | 299.0 | 1495 | 6.7757 | | 0.0016 | 300.0 | 1500 | 6.7695 | | 0.0016 | 301.0 | 1505 | 6.8204 | | 0.0016 | 302.0 | 1510 | 6.8644 | | 0.0016 | 303.0 | 1515 | 6.9037 | | 0.0016 | 304.0 | 1520 | 6.9387 | | 0.0016 | 305.0 | 1525 | 6.9556 | | 0.0016 | 306.0 | 1530 | 6.9667 | | 0.0016 | 307.0 | 1535 | 6.9767 | | 0.0016 | 308.0 | 1540 | 6.9847 | | 0.0016 | 309.0 | 1545 | 6.9937 | | 0.0016 | 310.0 | 1550 | 7.0039 | | 0.0016 | 311.0 | 1555 | 7.0099 | | 0.0016 | 312.0 | 1560 | 7.0131 | | 0.0016 | 313.0 | 1565 | 7.0060 | | 0.0016 | 314.0 | 1570 | 7.0075 | | 0.0016 | 315.0 | 1575 | 6.9711 | | 0.0016 | 316.0 | 1580 | 6.9077 | | 0.0016 | 317.0 | 1585 | 6.8807 | | 0.0016 | 318.0 | 1590 | 6.8715 | | 0.0016 | 319.0 | 1595 | 7.0453 | | 0.0016 | 320.0 | 1600 | 7.1296 | | 0.0016 | 321.0 | 1605 | 7.1700 | | 0.0016 | 322.0 | 1610 | 7.1957 | | 0.0016 | 323.0 | 1615 | 7.2246 | | 0.0016 | 324.0 | 1620 | 7.2203 | | 0.0016 | 325.0 | 1625 | 7.1906 | | 0.0016 | 326.0 | 1630 | 7.1123 | | 0.0016 | 327.0 | 1635 | 7.0487 | | 0.0016 | 328.0 | 1640 | 7.0754 | | 0.0016 | 329.0 | 1645 | 7.0720 | | 0.0016 | 330.0 | 1650 | 7.7359 | | 0.0016 | 331.0 | 1655 | 8.1144 | | 0.0016 | 332.0 | 1660 | 8.1948 | | 0.0016 | 333.0 | 1665 | 7.6789 | | 0.0016 | 334.0 | 1670 | 7.2641 | | 0.0016 | 335.0 | 1675 | 7.0059 | | 0.0016 | 336.0 | 1680 | 6.8664 | | 0.0016 | 337.0 | 1685 | 7.6781 | | 0.0016 | 338.0 | 1690 | 8.2166 | | 0.0016 | 339.0 | 1695 | 8.2097 | | 0.0016 | 340.0 | 1700 | 8.0633 | | 0.0016 | 341.0 | 1705 | 7.7949 | | 0.0016 | 342.0 | 1710 | 7.6837 | | 0.0016 | 343.0 | 1715 | 8.0097 | | 0.0016 | 344.0 | 1720 | 8.5007 | | 0.0016 | 345.0 | 1725 | 8.7010 | | 0.0016 | 346.0 | 1730 | 8.0451 | | 0.0016 | 347.0 | 1735 | 7.6189 | | 0.0016 | 348.0 | 1740 | 7.4604 | | 0.0016 | 349.0 | 1745 | 7.5163 | | 0.0016 | 350.0 | 1750 | 7.5746 | | 0.0016 | 351.0 | 1755 | 7.4632 | | 0.0016 | 352.0 | 1760 | 7.1102 | | 0.0016 | 353.0 | 1765 | 6.4877 | | 0.0016 | 354.0 | 1770 | 6.3033 | | 0.0016 | 355.0 | 1775 | 6.6979 | | 0.0016 | 356.0 | 1780 | 7.0705 | | 0.0016 | 357.0 | 1785 | 7.3203 | | 0.0016 | 358.0 | 1790 | 7.4830 | | 0.0016 | 359.0 | 1795 | 7.6461 | | 0.0016 | 360.0 | 1800 | 7.7119 | | 0.0016 | 361.0 | 1805 | 7.5914 | | 0.0016 | 362.0 | 1810 | 7.3182 | | 0.0016 | 363.0 | 1815 | 7.3903 | | 0.0016 | 364.0 | 1820 | 8.0079 | | 0.0016 | 365.0 | 1825 | 8.0597 | | 0.0016 | 366.0 | 1830 | 7.8428 | | 0.0016 | 367.0 | 1835 | 7.5143 | | 0.0016 | 368.0 | 1840 | 7.2284 | | 0.0016 | 369.0 | 1845 | 7.0162 | | 0.0016 | 370.0 | 1850 | 6.8926 | | 0.0016 | 371.0 | 1855 | 7.3822 | | 0.0016 | 372.0 | 1860 | 7.8967 | | 0.0016 | 373.0 | 1865 | 8.0392 | | 0.0016 | 374.0 | 1870 | 7.9983 | | 0.0016 | 375.0 | 1875 | 7.8815 | | 0.0016 | 376.0 | 1880 | 7.8492 | | 0.0016 | 377.0 | 1885 | 7.8035 | | 0.0016 | 378.0 | 1890 | 7.4683 | | 0.0016 | 379.0 | 1895 | 7.3420 | | 0.0016 | 380.0 | 1900 | 7.3416 | | 0.0016 | 381.0 | 1905 | 7.3430 | | 0.0016 | 382.0 | 1910 | 7.2376 | | 0.0016 | 383.0 | 1915 | 7.1507 | | 0.0016 | 384.0 | 1920 | 7.0881 | | 0.0016 | 385.0 | 1925 | 7.0704 | | 0.0016 | 386.0 | 1930 | 7.0555 | | 0.0016 | 387.0 | 1935 | 7.0502 | | 0.0016 | 388.0 | 1940 | 7.0534 | | 0.0016 | 389.0 | 1945 | 7.0915 | | 0.0016 | 390.0 | 1950 | 7.1284 | | 0.0016 | 391.0 | 1955 | 7.1272 | | 0.0016 | 392.0 | 1960 | 7.1419 | | 0.0016 | 393.0 | 1965 | 7.1658 | | 0.0016 | 394.0 | 1970 | 7.1974 | | 0.0016 | 395.0 | 1975 | 7.2366 | | 0.0016 | 396.0 | 1980 | 7.3965 | | 0.0016 | 397.0 | 1985 | 7.5398 | | 0.0016 | 398.0 | 1990 | 7.5613 | | 0.0016 | 399.0 | 1995 | 7.5672 | | 0.0005 | 400.0 | 2000 | 7.5680 | | 0.0005 | 401.0 | 2005 | 7.5623 | | 0.0005 | 402.0 | 2010 | 7.5433 | | 0.0005 | 403.0 | 2015 | 7.5265 | | 0.0005 | 404.0 | 2020 | 7.4945 | | 0.0005 | 405.0 | 2025 | 7.4549 | | 0.0005 | 406.0 | 2030 | 7.4577 | | 0.0005 | 407.0 | 2035 | 7.4779 | | 0.0005 | 408.0 | 2040 | 7.4891 | | 0.0005 | 409.0 | 2045 | 7.4967 | | 0.0005 | 410.0 | 2050 | 7.4107 | | 0.0005 | 411.0 | 2055 | 7.1431 | | 0.0005 | 412.0 | 2060 | 7.0035 | | 0.0005 | 413.0 | 2065 | 6.9515 | | 0.0005 | 414.0 | 2070 | 7.8803 | | 0.0005 | 415.0 | 2075 | 8.3234 | | 0.0005 | 416.0 | 2080 | 8.2208 | | 0.0005 | 417.0 | 2085 | 7.7650 | | 0.0005 | 418.0 | 2090 | 7.4476 | | 0.0005 | 419.0 | 2095 | 7.2501 | | 0.0005 | 420.0 | 2100 | 7.1334 | | 0.0005 | 421.0 | 2105 | 7.0705 | | 0.0005 | 422.0 | 2110 | 7.1738 | | 0.0005 | 423.0 | 2115 | 7.5227 | | 0.0005 | 424.0 | 2120 | 7.6620 | | 0.0005 | 425.0 | 2125 | 7.6966 | | 0.0005 | 426.0 | 2130 | 7.6232 | | 0.0005 | 427.0 | 2135 | 7.8679 | | 0.0005 | 428.0 | 2140 | 8.0675 | | 0.0005 | 429.0 | 2145 | 8.1644 | | 0.0005 | 430.0 | 2150 | 8.1593 | | 0.0005 | 431.0 | 2155 | 7.9555 | | 0.0005 | 432.0 | 2160 | 7.7445 | | 0.0005 | 433.0 | 2165 | 7.6033 | | 0.0005 | 434.0 | 2170 | 7.4350 | | 0.0005 | 435.0 | 2175 | 7.0173 | | 0.0005 | 436.0 | 2180 | 6.6588 | | 0.0005 | 437.0 | 2185 | 6.4948 | | 0.0005 | 438.0 | 2190 | 6.5824 | | 0.0005 | 439.0 | 2195 | 7.6368 | | 0.0005 | 440.0 | 2200 | 8.1728 | | 0.0005 | 441.0 | 2205 | 8.2726 | | 0.0005 | 442.0 | 2210 | 7.7469 | | 0.0005 | 443.0 | 2215 | 7.2934 | | 0.0005 | 444.0 | 2220 | 7.5596 | | 0.0005 | 445.0 | 2225 | 8.1491 | | 0.0005 | 446.0 | 2230 | 8.3301 | | 0.0005 | 447.0 | 2235 | 8.2448 | | 0.0005 | 448.0 | 2240 | 7.8104 | | 0.0005 | 449.0 | 2245 | 7.9229 | | 0.0005 | 450.0 | 2250 | 8.1022 | | 0.0005 | 451.0 | 2255 | 8.0934 | | 0.0005 | 452.0 | 2260 | 7.7094 | | 0.0005 | 453.0 | 2265 | 7.4145 | | 0.0005 | 454.0 | 2270 | 7.2390 | | 0.0005 | 455.0 | 2275 | 7.1232 | | 0.0005 | 456.0 | 2280 | 7.0217 | | 0.0005 | 457.0 | 2285 | 6.9630 | | 0.0005 | 458.0 | 2290 | 6.9990 | | 0.0005 | 459.0 | 2295 | 7.0975 | | 0.0005 | 460.0 | 2300 | 7.2731 | | 0.0005 | 461.0 | 2305 | 7.3999 | | 0.0005 | 462.0 | 2310 | 7.4908 | | 0.0005 | 463.0 | 2315 | 7.5303 | | 0.0005 | 464.0 | 2320 | 7.5396 | | 0.0005 | 465.0 | 2325 | 7.7371 | | 0.0005 | 466.0 | 2330 | 7.9230 | | 0.0005 | 467.0 | 2335 | 7.8567 | | 0.0005 | 468.0 | 2340 | 7.6849 | | 0.0005 | 469.0 | 2345 | 7.4645 | | 0.0005 | 470.0 | 2350 | 7.2725 | | 0.0005 | 471.0 | 2355 | 7.1459 | | 0.0005 | 472.0 | 2360 | 7.0676 | | 0.0005 | 473.0 | 2365 | 7.0132 | | 0.0005 | 474.0 | 2370 | 7.2372 | | 0.0005 | 475.0 | 2375 | 7.3722 | | 0.0005 | 476.0 | 2380 | 7.4377 | | 0.0005 | 477.0 | 2385 | 7.4674 | | 0.0005 | 478.0 | 2390 | 7.5110 | | 0.0005 | 479.0 | 2395 | 7.5325 | | 0.0005 | 480.0 | 2400 | 7.5274 | | 0.0005 | 481.0 | 2405 | 7.4662 | | 0.0005 | 482.0 | 2410 | 7.4494 | | 0.0005 | 483.0 | 2415 | 7.4524 | | 0.0005 | 484.0 | 2420 | 7.4031 | | 0.0005 | 485.0 | 2425 | 7.3314 | | 0.0005 | 486.0 | 2430 | 7.2969 | | 0.0005 | 487.0 | 2435 | 7.2840 | | 0.0005 | 488.0 | 2440 | 7.2786 | | 0.0005 | 489.0 | 2445 | 7.2734 | | 0.0005 | 490.0 | 2450 | 7.2686 | | 0.0005 | 491.0 | 2455 | 7.2638 | | 0.0005 | 492.0 | 2460 | 7.2776 | | 0.0005 | 493.0 | 2465 | 7.2766 | | 0.0005 | 494.0 | 2470 | 7.2540 | | 0.0005 | 495.0 | 2475 | 7.2411 | | 0.0005 | 496.0 | 2480 | 7.2265 | | 0.0005 | 497.0 | 2485 | 7.2093 | | 0.0005 | 498.0 | 2490 | 7.1993 | | 0.0005 | 499.0 | 2495 | 7.1873 | | 0.0003 | 500.0 | 2500 | 7.2056 | | 0.0003 | 501.0 | 2505 | 7.1856 | | 0.0003 | 502.0 | 2510 | 7.1596 | | 0.0003 | 503.0 | 2515 | 7.1560 | | 0.0003 | 504.0 | 2520 | 7.1538 | | 0.0003 | 505.0 | 2525 | 7.1515 | | 0.0003 | 506.0 | 2530 | 7.1467 | | 0.0003 | 507.0 | 2535 | 7.1436 | | 0.0003 | 508.0 | 2540 | 7.1430 | | 0.0003 | 509.0 | 2545 | 7.1430 | | 0.0003 | 510.0 | 2550 | 7.1457 | | 0.0003 | 511.0 | 2555 | 7.1455 | | 0.0003 | 512.0 | 2560 | 7.1454 | | 0.0003 | 513.0 | 2565 | 7.1502 | | 0.0003 | 514.0 | 2570 | 7.1590 | | 0.0003 | 515.0 | 2575 | 7.1663 | | 0.0003 | 516.0 | 2580 | 7.1721 | | 0.0003 | 517.0 | 2585 | 7.3175 | | 0.0003 | 518.0 | 2590 | 7.4024 | | 0.0003 | 519.0 | 2595 | 7.4591 | | 0.0003 | 520.0 | 2600 | 7.5005 | | 0.0003 | 521.0 | 2605 | 7.5190 | | 0.0003 | 522.0 | 2610 | 7.5244 | | 0.0003 | 523.0 | 2615 | 7.5142 | | 0.0003 | 524.0 | 2620 | 7.5016 | | 0.0003 | 525.0 | 2625 | 7.4944 | | 0.0003 | 526.0 | 2630 | 7.4822 | | 0.0003 | 527.0 | 2635 | 7.4619 | | 0.0003 | 528.0 | 2640 | 7.4452 | | 0.0003 | 529.0 | 2645 | 7.4272 | | 0.0003 | 530.0 | 2650 | 7.4127 | | 0.0003 | 531.0 | 2655 | 7.3964 | | 0.0003 | 532.0 | 2660 | 7.3839 | | 0.0003 | 533.0 | 2665 | 7.3774 | | 0.0003 | 534.0 | 2670 | 7.3751 | | 0.0003 | 535.0 | 2675 | 7.3757 | | 0.0003 | 536.0 | 2680 | 7.3995 | | 0.0003 | 537.0 | 2685 | 7.5061 | | 0.0003 | 538.0 | 2690 | 7.6753 | | 0.0003 | 539.0 | 2695 | 7.8693 | | 0.0003 | 540.0 | 2700 | 7.9629 | | 0.0003 | 541.0 | 2705 | 7.9955 | | 0.0003 | 542.0 | 2710 | 7.9871 | | 0.0003 | 543.0 | 2715 | 7.9481 | | 0.0003 | 544.0 | 2720 | 7.8922 | | 0.0003 | 545.0 | 2725 | 7.8483 | | 0.0003 | 546.0 | 2730 | 7.7363 | | 0.0003 | 547.0 | 2735 | 7.6099 | | 0.0003 | 548.0 | 2740 | 7.5148 | | 0.0003 | 549.0 | 2745 | 7.4265 | | 0.0003 | 550.0 | 2750 | 7.3395 | | 0.0003 | 551.0 | 2755 | 7.2852 | | 0.0003 | 552.0 | 2760 | 7.2260 | | 0.0003 | 553.0 | 2765 | 7.1631 | | 0.0003 | 554.0 | 2770 | 7.0940 | | 0.0003 | 555.0 | 2775 | 7.0527 | | 0.0003 | 556.0 | 2780 | 7.0263 | | 0.0003 | 557.0 | 2785 | 7.0268 | | 0.0003 | 558.0 | 2790 | 7.0415 | | 0.0003 | 559.0 | 2795 | 7.0553 | | 0.0003 | 560.0 | 2800 | 7.0820 | | 0.0003 | 561.0 | 2805 | 7.0881 | | 0.0003 | 562.0 | 2810 | 7.1004 | | 0.0003 | 563.0 | 2815 | 7.1222 | | 0.0003 | 564.0 | 2820 | 7.1388 | | 0.0003 | 565.0 | 2825 | 7.1178 | | 0.0003 | 566.0 | 2830 | 7.0618 | | 0.0003 | 567.0 | 2835 | 7.0635 | | 0.0003 | 568.0 | 2840 | 7.0807 | | 0.0003 | 569.0 | 2845 | 7.1122 | | 0.0003 | 570.0 | 2850 | 7.1495 | | 0.0003 | 571.0 | 2855 | 7.4825 | | 0.0003 | 572.0 | 2860 | 7.7928 | | 0.0003 | 573.0 | 2865 | 7.9377 | | 0.0003 | 574.0 | 2870 | 7.9796 | | 0.0003 | 575.0 | 2875 | 7.9407 | | 0.0003 | 576.0 | 2880 | 7.8785 | | 0.0003 | 577.0 | 2885 | 7.7661 | | 0.0003 | 578.0 | 2890 | 7.6202 | | 0.0003 | 579.0 | 2895 | 7.4375 | | 0.0003 | 580.0 | 2900 | 7.3003 | | 0.0003 | 581.0 | 2905 | 7.2421 | | 0.0003 | 582.0 | 2910 | 7.2144 | | 0.0003 | 583.0 | 2915 | 7.2008 | | 0.0003 | 584.0 | 2920 | 7.1943 | | 0.0003 | 585.0 | 2925 | 7.1818 | | 0.0003 | 586.0 | 2930 | 7.1649 | | 0.0003 | 587.0 | 2935 | 7.1678 | | 0.0003 | 588.0 | 2940 | 7.1682 | | 0.0003 | 589.0 | 2945 | 7.1682 | | 0.0003 | 590.0 | 2950 | 7.1820 | | 0.0003 | 591.0 | 2955 | 7.1733 | | 0.0003 | 592.0 | 2960 | 7.1073 | | 0.0003 | 593.0 | 2965 | 7.0944 | | 0.0003 | 594.0 | 2970 | 7.0930 | | 0.0003 | 595.0 | 2975 | 7.0540 | | 0.0003 | 596.0 | 2980 | 7.0093 | | 0.0003 | 597.0 | 2985 | 7.0004 | | 0.0003 | 598.0 | 2990 | 7.0067 | | 0.0003 | 599.0 | 2995 | 7.0881 | | 0.0 | 600.0 | 3000 | 7.1548 | | 0.0 | 601.0 | 3005 | 7.2375 | | 0.0 | 602.0 | 3010 | 7.2878 | | 0.0 | 603.0 | 3015 | 7.3185 | | 0.0 | 604.0 | 3020 | 7.3354 | | 0.0 | 605.0 | 3025 | 7.3406 | | 0.0 | 606.0 | 3030 | 7.3317 | | 0.0 | 607.0 | 3035 | 7.3047 | | 0.0 | 608.0 | 3040 | 7.3201 | | 0.0 | 609.0 | 3045 | 7.3473 | | 0.0 | 610.0 | 3050 | 7.3646 | | 0.0 | 611.0 | 3055 | 7.3762 | | 0.0 | 612.0 | 3060 | 7.4159 | | 0.0 | 613.0 | 3065 | 7.4055 | | 0.0 | 614.0 | 3070 | 7.3618 | | 0.0 | 615.0 | 3075 | 7.3422 | | 0.0 | 616.0 | 3080 | 7.3280 | | 0.0 | 617.0 | 3085 | 7.3244 | | 0.0 | 618.0 | 3090 | 7.3479 | | 0.0 | 619.0 | 3095 | 7.3583 | | 0.0 | 620.0 | 3100 | 7.3661 | | 0.0 | 621.0 | 3105 | 7.3711 | | 0.0 | 622.0 | 3110 | 7.3755 | | 0.0 | 623.0 | 3115 | 7.3852 | | 0.0 | 624.0 | 3120 | 7.3909 | | 0.0 | 625.0 | 3125 | 7.3913 | | 0.0 | 626.0 | 3130 | 7.3942 | | 0.0 | 627.0 | 3135 | 7.3957 | | 0.0 | 628.0 | 3140 | 7.3552 | | 0.0 | 629.0 | 3145 | 7.3116 | | 0.0 | 630.0 | 3150 | 7.2359 | | 0.0 | 631.0 | 3155 | 7.1832 | | 0.0 | 632.0 | 3160 | 7.4450 | | 0.0 | 633.0 | 3165 | 7.8032 | | 0.0 | 634.0 | 3170 | 8.0078 | | 0.0 | 635.0 | 3175 | 8.0751 | | 0.0 | 636.0 | 3180 | 8.1035 | | 0.0 | 637.0 | 3185 | 8.1090 | | 0.0 | 638.0 | 3190 | 8.0995 | | 0.0 | 639.0 | 3195 | 8.0641 | | 0.0 | 640.0 | 3200 | 8.0800 | | 0.0 | 641.0 | 3205 | 8.2266 | | 0.0 | 642.0 | 3210 | 8.2961 | | 0.0 | 643.0 | 3215 | 8.3050 | | 0.0 | 644.0 | 3220 | 8.2454 | | 0.0 | 645.0 | 3225 | 8.0781 | | 0.0 | 646.0 | 3230 | 7.9448 | | 0.0 | 647.0 | 3235 | 8.4501 | | 0.0 | 648.0 | 3240 | 8.8742 | | 0.0 | 649.0 | 3245 | 8.9382 | | 0.0 | 650.0 | 3250 | 8.8590 | | 0.0 | 651.0 | 3255 | 8.7752 | | 0.0 | 652.0 | 3260 | 9.0839 | | 0.0 | 653.0 | 3265 | 9.2408 | | 0.0 | 654.0 | 3270 | 8.7313 | | 0.0 | 655.0 | 3275 | 8.1789 | | 0.0 | 656.0 | 3280 | 7.9457 | | 0.0 | 657.0 | 3285 | 8.2609 | | 0.0 | 658.0 | 3290 | 8.8193 | | 0.0 | 659.0 | 3295 | 8.9991 | | 0.0 | 660.0 | 3300 | 8.9444 | | 0.0 | 661.0 | 3305 | 8.7020 | | 0.0 | 662.0 | 3310 | 8.4658 | | 0.0 | 663.0 | 3315 | 8.1487 | | 0.0 | 664.0 | 3320 | 7.9372 | | 0.0 | 665.0 | 3325 | 7.8059 | | 0.0 | 666.0 | 3330 | 7.7367 | | 0.0 | 667.0 | 3335 | 7.7029 | | 0.0 | 668.0 | 3340 | 7.7040 | | 0.0 | 669.0 | 3345 | 7.7337 | | 0.0 | 670.0 | 3350 | 7.7597 | | 0.0 | 671.0 | 3355 | 7.7904 | | 0.0 | 672.0 | 3360 | 7.8477 | | 0.0 | 673.0 | 3365 | 7.9769 | | 0.0 | 674.0 | 3370 | 8.0324 | | 0.0 | 675.0 | 3375 | 8.0651 | | 0.0 | 676.0 | 3380 | 8.0791 | | 0.0 | 677.0 | 3385 | 8.0787 | | 0.0 | 678.0 | 3390 | 8.0685 | | 0.0 | 679.0 | 3395 | 8.0537 | | 0.0 | 680.0 | 3400 | 8.0366 | | 0.0 | 681.0 | 3405 | 8.0120 | | 0.0 | 682.0 | 3410 | 7.9724 | | 0.0 | 683.0 | 3415 | 7.9284 | | 0.0 | 684.0 | 3420 | 7.9102 | | 0.0 | 685.0 | 3425 | 7.9282 | | 0.0 | 686.0 | 3430 | 7.9453 | | 0.0 | 687.0 | 3435 | 8.0253 | | 0.0 | 688.0 | 3440 | 8.1062 | | 0.0 | 689.0 | 3445 | 8.4114 | | 0.0 | 690.0 | 3450 | 8.6330 | | 0.0 | 691.0 | 3455 | 8.7306 | | 0.0 | 692.0 | 3460 | 8.7663 | | 0.0 | 693.0 | 3465 | 8.7704 | | 0.0 | 694.0 | 3470 | 8.7281 | | 0.0 | 695.0 | 3475 | 8.5414 | | 0.0 | 696.0 | 3480 | 8.4111 | | 0.0 | 697.0 | 3485 | 8.3237 | | 0.0 | 698.0 | 3490 | 8.2621 | | 0.0 | 699.0 | 3495 | 8.2238 | | 0.0003 | 700.0 | 3500 | 8.2110 | | 0.0003 | 701.0 | 3505 | 8.2046 | | 0.0003 | 702.0 | 3510 | 8.2012 | | 0.0003 | 703.0 | 3515 | 8.1988 | | 0.0003 | 704.0 | 3520 | 8.1966 | | 0.0003 | 705.0 | 3525 | 8.1963 | | 0.0003 | 706.0 | 3530 | 8.1990 | | 0.0003 | 707.0 | 3535 | 8.1923 | | 0.0003 | 708.0 | 3540 | 8.1445 | | 0.0003 | 709.0 | 3545 | 8.0983 | | 0.0003 | 710.0 | 3550 | 7.9954 | | 0.0003 | 711.0 | 3555 | 8.1276 | | 0.0003 | 712.0 | 3560 | 8.2012 | | 0.0003 | 713.0 | 3565 | 8.2454 | | 0.0003 | 714.0 | 3570 | 8.1835 | | 0.0003 | 715.0 | 3575 | 8.1445 | | 0.0003 | 716.0 | 3580 | 8.1275 | | 0.0003 | 717.0 | 3585 | 8.1340 | | 0.0003 | 718.0 | 3590 | 8.1430 | | 0.0003 | 719.0 | 3595 | 8.1899 | | 0.0003 | 720.0 | 3600 | 8.3494 | | 0.0003 | 721.0 | 3605 | 8.4368 | | 0.0003 | 722.0 | 3610 | 8.4475 | | 0.0003 | 723.0 | 3615 | 8.4250 | | 0.0003 | 724.0 | 3620 | 8.4015 | | 0.0003 | 725.0 | 3625 | 8.3753 | | 0.0003 | 726.0 | 3630 | 8.3584 | | 0.0003 | 727.0 | 3635 | 8.3282 | | 0.0003 | 728.0 | 3640 | 8.2987 | | 0.0003 | 729.0 | 3645 | 8.2507 | | 0.0003 | 730.0 | 3650 | 8.2132 | | 0.0003 | 731.0 | 3655 | 8.1682 | | 0.0003 | 732.0 | 3660 | 8.1149 | | 0.0003 | 733.0 | 3665 | 8.0639 | | 0.0003 | 734.0 | 3670 | 8.0315 | | 0.0003 | 735.0 | 3675 | 8.0111 | | 0.0003 | 736.0 | 3680 | 8.0127 | | 0.0003 | 737.0 | 3685 | 8.0286 | | 0.0003 | 738.0 | 3690 | 8.0844 | | 0.0003 | 739.0 | 3695 | 8.1191 | | 0.0003 | 740.0 | 3700 | 8.1408 | | 0.0003 | 741.0 | 3705 | 8.1528 | | 0.0003 | 742.0 | 3710 | 8.1587 | | 0.0003 | 743.0 | 3715 | 8.1577 | | 0.0003 | 744.0 | 3720 | 8.1560 | | 0.0003 | 745.0 | 3725 | 8.1390 | | 0.0003 | 746.0 | 3730 | 8.1245 | | 0.0003 | 747.0 | 3735 | 8.0511 | | 0.0003 | 748.0 | 3740 | 7.9951 | | 0.0003 | 749.0 | 3745 | 7.9622 | | 0.0003 | 750.0 | 3750 | 7.9511 | | 0.0003 | 751.0 | 3755 | 7.9444 | | 0.0003 | 752.0 | 3760 | 7.9350 | | 0.0003 | 753.0 | 3765 | 7.9291 | | 0.0003 | 754.0 | 3770 | 7.9111 | | 0.0003 | 755.0 | 3775 | 7.8924 | | 0.0003 | 756.0 | 3780 | 7.8798 | | 0.0003 | 757.0 | 3785 | 7.8645 | | 0.0003 | 758.0 | 3790 | 7.8458 | | 0.0003 | 759.0 | 3795 | 7.8342 | | 0.0003 | 760.0 | 3800 | 7.8409 | | 0.0003 | 761.0 | 3805 | 7.8678 | | 0.0003 | 762.0 | 3810 | 7.8830 | | 0.0003 | 763.0 | 3815 | 7.8976 | | 0.0003 | 764.0 | 3820 | 7.8980 | | 0.0003 | 765.0 | 3825 | 7.8971 | | 0.0003 | 766.0 | 3830 | 7.8976 | | 0.0003 | 767.0 | 3835 | 7.8952 | | 0.0003 | 768.0 | 3840 | 7.8917 | | 0.0003 | 769.0 | 3845 | 7.8917 | | 0.0003 | 770.0 | 3850 | 7.8936 | | 0.0003 | 771.0 | 3855 | 7.8947 | | 0.0003 | 772.0 | 3860 | 7.8918 | | 0.0003 | 773.0 | 3865 | 7.8896 | | 0.0003 | 774.0 | 3870 | 7.8895 | | 0.0003 | 775.0 | 3875 | 7.8923 | | 0.0003 | 776.0 | 3880 | 7.8978 | | 0.0003 | 777.0 | 3885 | 7.9035 | | 0.0003 | 778.0 | 3890 | 7.9438 | | 0.0003 | 779.0 | 3895 | 7.9692 | | 0.0003 | 780.0 | 3900 | 7.9863 | | 0.0003 | 781.0 | 3905 | 7.9950 | | 0.0003 | 782.0 | 3910 | 8.0015 | | 0.0003 | 783.0 | 3915 | 8.0069 | | 0.0003 | 784.0 | 3920 | 8.0109 | | 0.0003 | 785.0 | 3925 | 8.0149 | | 0.0003 | 786.0 | 3930 | 8.0187 | | 0.0003 | 787.0 | 3935 | 8.0199 | | 0.0003 | 788.0 | 3940 | 8.0209 | | 0.0003 | 789.0 | 3945 | 8.0164 | | 0.0003 | 790.0 | 3950 | 7.9772 | | 0.0003 | 791.0 | 3955 | 7.9407 | | 0.0003 | 792.0 | 3960 | 7.9087 | | 0.0003 | 793.0 | 3965 | 7.8911 | | 0.0003 | 794.0 | 3970 | 7.9289 | | 0.0003 | 795.0 | 3975 | 7.9628 | | 0.0003 | 796.0 | 3980 | 7.9680 | | 0.0003 | 797.0 | 3985 | 7.9641 | | 0.0003 | 798.0 | 3990 | 7.9619 | | 0.0003 | 799.0 | 3995 | 7.9623 | | 0.0 | 800.0 | 4000 | 7.9633 | | 0.0 | 801.0 | 4005 | 7.9648 | | 0.0 | 802.0 | 4010 | 7.9798 | | 0.0 | 803.0 | 4015 | 7.9897 | | 0.0 | 804.0 | 4020 | 7.9961 | | 0.0 | 805.0 | 4025 | 7.9887 | | 0.0 | 806.0 | 4030 | 7.9816 | | 0.0 | 807.0 | 4035 | 7.9786 | | 0.0 | 808.0 | 4040 | 7.9777 | | 0.0 | 809.0 | 4045 | 7.9770 | | 0.0 | 810.0 | 4050 | 7.9754 | | 0.0 | 811.0 | 4055 | 7.9729 | | 0.0 | 812.0 | 4060 | 7.9698 | | 0.0 | 813.0 | 4065 | 7.9702 | | 0.0 | 814.0 | 4070 | 7.9739 | | 0.0 | 815.0 | 4075 | 7.9780 | | 0.0 | 816.0 | 4080 | 7.9806 | | 0.0 | 817.0 | 4085 | 7.9826 | | 0.0 | 818.0 | 4090 | 7.9843 | | 0.0 | 819.0 | 4095 | 7.9845 | | 0.0 | 820.0 | 4100 | 7.9863 | | 0.0 | 821.0 | 4105 | 7.9889 | | 0.0 | 822.0 | 4110 | 7.9911 | | 0.0 | 823.0 | 4115 | 7.9917 | | 0.0 | 824.0 | 4120 | 7.9917 | | 0.0 | 825.0 | 4125 | 8.0033 | | 0.0 | 826.0 | 4130 | 8.0243 | | 0.0 | 827.0 | 4135 | 8.0396 | | 0.0 | 828.0 | 4140 | 8.0470 | | 0.0 | 829.0 | 4145 | 8.0425 | | 0.0 | 830.0 | 4150 | 8.0306 | | 0.0 | 831.0 | 4155 | 8.0234 | | 0.0 | 832.0 | 4160 | 8.0166 | | 0.0 | 833.0 | 4165 | 8.0121 | | 0.0 | 834.0 | 4170 | 8.0110 | | 0.0 | 835.0 | 4175 | 8.0115 | | 0.0 | 836.0 | 4180 | 8.0123 | | 0.0 | 837.0 | 4185 | 8.0130 | | 0.0 | 838.0 | 4190 | 8.0135 | | 0.0 | 839.0 | 4195 | 8.0111 | | 0.0 | 840.0 | 4200 | 8.0079 | | 0.0 | 841.0 | 4205 | 8.0060 | | 0.0 | 842.0 | 4210 | 8.0050 | | 0.0 | 843.0 | 4215 | 8.0022 | | 0.0 | 844.0 | 4220 | 8.0029 | | 0.0 | 845.0 | 4225 | 7.9907 | | 0.0 | 846.0 | 4230 | 7.9819 | | 0.0 | 847.0 | 4235 | 7.9792 | | 0.0 | 848.0 | 4240 | 7.9801 | | 0.0 | 849.0 | 4245 | 7.9793 | | 0.0 | 850.0 | 4250 | 7.9804 | | 0.0 | 851.0 | 4255 | 7.9819 | | 0.0 | 852.0 | 4260 | 7.9834 | | 0.0 | 853.0 | 4265 | 7.9841 | | 0.0 | 854.0 | 4270 | 7.9749 | | 0.0 | 855.0 | 4275 | 7.9663 | | 0.0 | 856.0 | 4280 | 7.9615 | | 0.0 | 857.0 | 4285 | 7.9586 | | 0.0 | 858.0 | 4290 | 7.9576 | | 0.0 | 859.0 | 4295 | 7.9592 | | 0.0 | 860.0 | 4300 | 7.9603 | | 0.0 | 861.0 | 4305 | 7.9620 | | 0.0 | 862.0 | 4310 | 7.9653 | | 0.0 | 863.0 | 4315 | 7.9676 | | 0.0 | 864.0 | 4320 | 7.9700 | | 0.0 | 865.0 | 4325 | 7.9705 | | 0.0 | 866.0 | 4330 | 7.9704 | | 0.0 | 867.0 | 4335 | 7.9727 | | 0.0 | 868.0 | 4340 | 7.9742 | | 0.0 | 869.0 | 4345 | 7.9749 | | 0.0 | 870.0 | 4350 | 7.9759 | | 0.0 | 871.0 | 4355 | 7.9768 | | 0.0 | 872.0 | 4360 | 7.9808 | | 0.0 | 873.0 | 4365 | 7.9833 | | 0.0 | 874.0 | 4370 | 7.9846 | | 0.0 | 875.0 | 4375 | 7.9865 | | 0.0 | 876.0 | 4380 | 7.9886 | | 0.0 | 877.0 | 4385 | 7.9896 | | 0.0 | 878.0 | 4390 | 7.9879 | | 0.0 | 879.0 | 4395 | 7.9882 | | 0.0 | 880.0 | 4400 | 7.9908 | | 0.0 | 881.0 | 4405 | 7.9923 | | 0.0 | 882.0 | 4410 | 7.9940 | | 0.0 | 883.0 | 4415 | 8.0049 | | 0.0 | 884.0 | 4420 | 8.0110 | | 0.0 | 885.0 | 4425 | 8.0147 | | 0.0 | 886.0 | 4430 | 8.0186 | | 0.0 | 887.0 | 4435 | 8.0218 | | 0.0 | 888.0 | 4440 | 8.0243 | | 0.0 | 889.0 | 4445 | 8.0263 | | 0.0 | 890.0 | 4450 | 8.0270 | | 0.0 | 891.0 | 4455 | 8.0259 | | 0.0 | 892.0 | 4460 | 8.0262 | | 0.0 | 893.0 | 4465 | 8.0251 | | 0.0 | 894.0 | 4470 | 8.0222 | | 0.0 | 895.0 | 4475 | 8.0264 | | 0.0 | 896.0 | 4480 | 8.0314 | | 0.0 | 897.0 | 4485 | 7.9993 | | 0.0 | 898.0 | 4490 | 7.9422 | | 0.0 | 899.0 | 4495 | 7.9029 | | 0.0 | 900.0 | 4500 | 7.8787 | | 0.0 | 901.0 | 4505 | 7.8649 | | 0.0 | 902.0 | 4510 | 7.8576 | | 0.0 | 903.0 | 4515 | 7.8572 | | 0.0 | 904.0 | 4520 | 7.8583 | | 0.0 | 905.0 | 4525 | 7.8608 | | 0.0 | 906.0 | 4530 | 7.8629 | | 0.0 | 907.0 | 4535 | 7.8646 | | 0.0 | 908.0 | 4540 | 7.8653 | | 0.0 | 909.0 | 4545 | 7.8648 | | 0.0 | 910.0 | 4550 | 7.8648 | | 0.0 | 911.0 | 4555 | 7.8656 | | 0.0 | 912.0 | 4560 | 7.8664 | | 0.0 | 913.0 | 4565 | 7.8673 | | 0.0 | 914.0 | 4570 | 7.8652 | | 0.0 | 915.0 | 4575 | 7.8608 | | 0.0 | 916.0 | 4580 | 7.8587 | | 0.0 | 917.0 | 4585 | 7.8582 | | 0.0 | 918.0 | 4590 | 7.8578 | | 0.0 | 919.0 | 4595 | 7.8576 | | 0.0 | 920.0 | 4600 | 7.8601 | | 0.0 | 921.0 | 4605 | 7.8619 | | 0.0 | 922.0 | 4610 | 7.8625 | | 0.0 | 923.0 | 4615 | 7.8608 | | 0.0 | 924.0 | 4620 | 7.8602 | | 0.0 | 925.0 | 4625 | 7.8618 | | 0.0 | 926.0 | 4630 | 7.8637 | | 0.0 | 927.0 | 4635 | 7.8667 | | 0.0 | 928.0 | 4640 | 7.8688 | | 0.0 | 929.0 | 4645 | 7.8704 | | 0.0 | 930.0 | 4650 | 8.0677 | | 0.0 | 931.0 | 4655 | 8.2072 | | 0.0 | 932.0 | 4660 | 8.2881 | | 0.0 | 933.0 | 4665 | 8.3336 | | 0.0 | 934.0 | 4670 | 8.3598 | | 0.0 | 935.0 | 4675 | 8.3735 | | 0.0 | 936.0 | 4680 | 8.3807 | | 0.0 | 937.0 | 4685 | 8.3839 | | 0.0 | 938.0 | 4690 | 8.3851 | | 0.0 | 939.0 | 4695 | 8.3839 | | 0.0 | 940.0 | 4700 | 8.3757 | | 0.0 | 941.0 | 4705 | 8.3662 | | 0.0 | 942.0 | 4710 | 8.3601 | | 0.0 | 943.0 | 4715 | 8.3562 | | 0.0 | 944.0 | 4720 | 8.3537 | | 0.0 | 945.0 | 4725 | 8.3519 | | 0.0 | 946.0 | 4730 | 8.3451 | | 0.0 | 947.0 | 4735 | 8.3340 | | 0.0 | 948.0 | 4740 | 8.3263 | | 0.0 | 949.0 | 4745 | 8.3216 | | 0.0 | 950.0 | 4750 | 8.3179 | | 0.0 | 951.0 | 4755 | 8.3154 | | 0.0 | 952.0 | 4760 | 8.3134 | | 0.0 | 953.0 | 4765 | 8.3076 | | 0.0 | 954.0 | 4770 | 8.3021 | | 0.0 | 955.0 | 4775 | 8.2970 | | 0.0 | 956.0 | 4780 | 8.2906 | | 0.0 | 957.0 | 4785 | 8.2868 | | 0.0 | 958.0 | 4790 | 8.2843 | | 0.0 | 959.0 | 4795 | 8.2827 | | 0.0 | 960.0 | 4800 | 8.2812 | | 0.0 | 961.0 | 4805 | 8.2718 | | 0.0 | 962.0 | 4810 | 8.2711 | | 0.0 | 963.0 | 4815 | 8.2724 | | 0.0 | 964.0 | 4820 | 8.2733 | | 0.0 | 965.0 | 4825 | 8.2742 | | 0.0 | 966.0 | 4830 | 8.2747 | | 0.0 | 967.0 | 4835 | 8.2738 | | 0.0 | 968.0 | 4840 | 8.2638 | | 0.0 | 969.0 | 4845 | 8.2467 | | 0.0 | 970.0 | 4850 | 8.2368 | | 0.0 | 971.0 | 4855 | 8.2293 | | 0.0 | 972.0 | 4860 | 8.2246 | | 0.0 | 973.0 | 4865 | 8.2216 | | 0.0 | 974.0 | 4870 | 8.2196 | | 0.0 | 975.0 | 4875 | 8.2184 | | 0.0 | 976.0 | 4880 | 8.2176 | | 0.0 | 977.0 | 4885 | 8.2154 | | 0.0 | 978.0 | 4890 | 8.2081 | | 0.0 | 979.0 | 4895 | 8.2039 | | 0.0 | 980.0 | 4900 | 8.2007 | | 0.0 | 981.0 | 4905 | 8.1990 | | 0.0 | 982.0 | 4910 | 8.1961 | | 0.0 | 983.0 | 4915 | 8.1936 | | 0.0 | 984.0 | 4920 | 8.1916 | | 0.0 | 985.0 | 4925 | 8.1905 | | 0.0 | 986.0 | 4930 | 8.1902 | | 0.0 | 987.0 | 4935 | 8.1908 | | 0.0 | 988.0 | 4940 | 8.1911 | | 0.0 | 989.0 | 4945 | 8.1912 | | 0.0 | 990.0 | 4950 | 8.1913 | | 0.0 | 991.0 | 4955 | 8.1912 | | 0.0 | 992.0 | 4960 | 8.1908 | | 0.0 | 993.0 | 4965 | 8.1901 | | 0.0 | 994.0 | 4970 | 8.1888 | | 0.0 | 995.0 | 4975 | 8.1881 | | 0.0 | 996.0 | 4980 | 8.1877 | | 0.0 | 997.0 | 4985 | 8.1876 | | 0.0 | 998.0 | 4990 | 8.1876 | | 0.0 | 999.0 | 4995 | 8.1875 | | 0.0002 | 1000.0 | 5000 | 8.1875 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.7 - Tokenizers 0.15.0
pankajmathur/model_420
pankajmathur
2024-03-04T13:01:14Z
22
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.02707", "license:llama2", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-03T06:17:15Z
--- language: - en license: llama2 library_name: transformers model-index: - name: model_420 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.14 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_420 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.73 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_420 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 70.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_420 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.0 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_420 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_420 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 28.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_420 name: Open LLM Leaderboard --- # model_420 A Llama2-70b model trained on Orca Style datasets. **P.S. If you're interested to collaborate, please connect with me at www.linkedin.com/in/pankajam** ## Evaluation We evaluated model_420 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI. Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) ||| |:------:|:--------:| |**Task**|**Value**| |*ARC*|0.7014| |*HellaSwag*|0.8773| |*MMLU*|0.7035| |*TruthfulQA*|0.5400| |*Winogrande*|0.8374| |*GSM8K*|0.2858| |*DROP*|0.1435| |**Total Average**|**0.5841**| ### Prompt Foramt ``` ### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: Tell me about Orcas. ### Assistant: ``` #### OobaBooga Instructions: This model required upto 45GB GPU VRAM in 4bit so it can be loaded directly on Single RTX 6000/L40/A40/A100/H100 GPU or Double RTX 4090/L4/A10/RTX 3090/RTX A5000 So, if you have access to Machine with 45GB GPU VRAM and have installed [OobaBooga Web UI](https://github.com/oobabooga/text-generation-webui) on it. You can just download this model by using HF repo link directly on OobaBooga Web UI "Model" Tab/Page & Just use **load-in-4bit** option in it. ![model_load_screenshot](https://huggingface.co/pankajmathur/model_101/resolve/main/oobabooga_model_load_screenshot.png) After that go to Default Tab/Page on OobaBooga Web UI and **copy paste above prompt format into Input** and Enjoy! ![default_input_screenshot](https://huggingface.co/pankajmathur/model_101/resolve/main/default_input_screenshot.png) <br> #### Code Instructions: Below shows a code example on how to use this model ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained("pankajmathur/model_420") model = AutoModelForCausalLM.from_pretrained( "pankajmathur/model_420", torch_dtype=torch.float16, load_in_4bit=True, low_cpu_mem_usage=True, device_map="auto" ) system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n" #generate text steps instruction = "Tell me about Orcas." prompt = f"{system_prompt}### User: {instruction}\n\n### Assistant:\n" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` #### Limitations & Biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. ### Citiation: Please kindly cite using the following BibTeX: ``` @misc{model_51, author = {Pankaj Mathur}, title = {model_420: An Orca Style Llama2-70b model}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://https://huggingface.co/psmathur/model_51}, } ``` ``` @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @software{touvron2023llama2, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom}, year={2023} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__model_420) | Metric | Value | |-----------------------|---------------------------| | Avg. | 58.41 | | ARC (25-shot) | 70.14 | | HellaSwag (10-shot) | 87.73 | | MMLU (5-shot) | 70.35 | | TruthfulQA (0-shot) | 54.0 | | Winogrande (5-shot) | 83.74 | | GSM8K (5-shot) | 28.58 | | DROP (3-shot) | 14.35 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__model_420) | Metric |Value| |---------------------------------|----:| |Avg. |65.76| |AI2 Reasoning Challenge (25-Shot)|70.14| |HellaSwag (10-Shot) |87.73| |MMLU (5-Shot) |70.35| |TruthfulQA (0-shot) |54.00| |Winogrande (5-shot) |83.74| |GSM8k (5-shot) |28.58|
Menouar/saqr-7b-beta
Menouar
2024-03-04T13:01:09Z
3
1
peft
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:Menouar/saqr-7b-merged", "base_model:adapter:Menouar/saqr-7b-merged", "license:apache-2.0", "region:us" ]
null
2024-02-16T09:58:22Z
--- license: apache-2.0 library_name: peft tags: - trl - dpo - generated_from_trainer base_model: Menouar/saqr-7b-merged model-index: - name: saqr-7b-beta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # saqr-7b-beta This model is a fine-tuned version of [Menouar/saqr-7b-merged](https://huggingface.co/Menouar/saqr-7b-merged) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Menouar__saqr-7b-beta) | Metric |Value| |---------------------------------|----:| |Avg. |44.84| |AI2 Reasoning Challenge (25-Shot)|47.78| |HellaSwag (10-Shot) |77.61| |MMLU (5-Shot) |25.80| |TruthfulQA (0-shot) |39.38| |Winogrande (5-shot) |70.56| |GSM8k (5-shot) | 7.88|
Menouar/phi-2-basic-maths
Menouar
2024-03-04T13:00:21Z
5
2
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "pytorch", "text-generation", "en", "dataset:gsm8k", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "model-index", "region:us" ]
text-generation
2024-02-05T20:15:16Z
--- language: - en license: mit library_name: peft tags: - trl - sft - generated_from_trainer - pytorch datasets: - gsm8k base_model: microsoft/phi-2 pipeline_tag: text-generation model-index: - name: phi-2-basic-maths results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 55.8 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 71.15 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 47.27 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 75.3 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthfulqa config: truthfulqa split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 41.4 name: mc2 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 30.7 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 41.4 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/phi-2-basic-maths name: Open LLM Leaderboard --- # phi-2-basic-maths This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an [GSM8K dataset](https://huggingface.co/datasets/gsm8k). ## Model Description The objective of this model is to evaluate Phi-2's ability to provide correct solutions to reasoning problems after fine-tuning. This model was trained using techniques such as TRL, LoRA quantization, and Flash Attention. To test it, you can use the following code: ```python import torch from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer, pipeline # Specify the model ID peft_model_id = "Menouar/phi-2-basic-maths" # Load Model with PEFT adapter model = AutoPeftModelForCausalLM.from_pretrained( peft_model_id, device_map="auto", torch_dtype=torch.float16 ) tokenizer = AutoTokenizer.from_pretrained(peft_model_id) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) ``` ## Training procedure The complete training procedure can be found on my [Notebook](https://colab.research.google.com/drive/1mvfoEqc0mwuf8FqrABWt06qwAsU2QrvK). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 42 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 84 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 30 ### Training results The training results can be found on [Tensoboard](https://huggingface.co/Menouar/phi-2-basic-maths/tensorboard). ## Evaluation procedure The complete Evaluation procedure can be found on my [Notebook](https://colab.research.google.com/drive/1xsdxOm-CgZmLAPFgp8iU9lLFEIIHGiUK). Accuracy: 36.16% Unclear answers: 7.81% ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Menouar__phi-2-basic-maths) | Metric |Value| |---------------------------------|----:| |Avg. |53.60| |AI2 Reasoning Challenge (25-Shot)|55.80| |HellaSwag (10-Shot) |71.15| |MMLU (5-Shot) |47.27| |TruthfulQA (0-shot) |41.40| |Winogrande (5-shot) |75.30| |GSM8k (5-shot) |30.71|
abhinand/tamil-llama-13b-base-v0.1
abhinand
2024-03-04T12:56:30Z
1,449
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ta", "en", "arxiv:2311.05845", "license:llama2", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-07T11:44:22Z
--- language: - ta - en license: llama2 model-index: - name: tamil-llama-13b-base-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 52.82 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-base-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 79.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-base-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 52.05 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-base-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 36.56 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-base-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 75.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-base-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-base-v0.1 name: Open LLM Leaderboard --- # Tamil LLaMA 13B Base v0.1 [pre-trained] Welcome to the inaugural release of the Tamil LLaMA 13B base model – an important step in advancing LLMs for the Tamil language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks. To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact. > **Please Note:** This model, labeled as a foundational Tamil Language Model (LLM), is designed primarily for Causal Language Modeling (LM) purposes. In other words, if you are looking for an instruction following model in Tamil, you may find [abhinand/tamil-llama-13b-instruct-v0.1](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) more suitable for your needs. ## Model description The Tamil LLaMA models have been enhanced and tailored specifically with an extensive Tamil vocabulary of 16,000 tokens, building upon the foundation set by the original LLaMA-2. - **Model type:** A 13B parameter model for Causal LM pre-trained on [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset's Tamil subset. - **Language(s):** Tamil and English - **License:** GNU General Public License v3.0 - **Source Model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) - **Training Precision:** `float16` - **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) ## Related Models | Model | Type | Data | Base Model | # Params | Download Links | |--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------| | Tamil LLaMA 7B Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) | | Tamil LLaMA 13B Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) | | Tamil LLaMA 7B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) | | Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](abhinand/tamil-llama-13b-instruct-v0.1) | ## Usage Note It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications. ## Meet the Developers Get to know the creators behind this innovative model and follow their contributions to the field: - [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/) ## Citation If you use this model or any of the the Tamil-Llama datasets in your research, please cite: ```bibtex @misc{balachandran2023tamilllama, title={Tamil-Llama: A New Tamil Language Model Based on Llama 2}, author={Abhinand Balachandran}, year={2023}, eprint={2311.05845}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__tamil-llama-13b-base-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |49.50| |AI2 Reasoning Challenge (25-Shot)|52.82| |HellaSwag (10-Shot) |79.95| |MMLU (5-Shot) |52.05| |TruthfulQA (0-shot) |36.56| |Winogrande (5-shot) |75.61| |GSM8k (5-shot) | 0.00|
abhinand/gemma-2b-it-tamil-v0.1-alpha
abhinand
2024-03-04T12:56:10Z
97
5
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "en", "ta", "dataset:vicgalle/alpaca-gpt4", "dataset:abhinand/tamil-alpaca", "base_model:abhinand/gemma-2b-tamil", "base_model:finetune:abhinand/gemma-2b-tamil", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-25T09:59:30Z
--- language: - en - ta license: other datasets: - vicgalle/alpaca-gpt4 - abhinand/tamil-alpaca license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms base_model: abhinand/gemma-2b-tamil model-index: - name: gemma-2b-it-tamil-v0.1-alpha results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 50.09 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 71.41 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 39.94 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 42.63 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 64.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 16.6 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-it-tamil-v0.1-alpha name: Open LLM Leaderboard --- # Gemma 2B Tamil v0.1 Alpha [Experimental Release] This is a Tamil instruction finetuned version of Google's Gemma 2B model. This is an experiment to see if Gemma can be adapted for Tamil without expanding vocabulary. While the responses may be rusty at times, it shows a lot of promise for a 2B parameter model. **Procedure:** 1. The [Gemma base model](https://huggingface.co/google/gemma-2b) was continually pretrained on all available Tamil Wikipedia data for 3 epochs. 2. The updated model was then finetuned on a mix of English and Tamil alpaca datasets for 5 epochs. > **Note:** This project is currently under development (FOR TAMIL). The initial pretraining phase may not have been extensive enough, which suggests that the model's performance could improve by extending the pretraining on a larger dataset, such as CulturaX. ### 🏆 Benchmarks This model outperforms Google's Gemma 2B base and instruct models on all benchmarks in Nous evaluation suite. It also surprisingly outperforms [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B) (the best performing 2B model in benchmarks as of Feb 25, 2024) despite being a model aimed at language adaptation. | Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| |[gemma-2b-it-tamil-v0.1-alpha](https://huggingface.co/abhinand/gemma-2b-it-tamil-v0.1-alpha)[📄](https://gist.github.com/abhinand5/559d542437f6b7060fee94cc1f7861f5)| 39.41| 23.38| 58.94| 43.18| 32.14| | [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B) [📄](https://gist.github.com/mlabonne/4b638752fc3227df566f9562064cb864) | 38.39 | 24.48 | 51.22 | 47.02 | 30.85 | | [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) [📄](https://gist.github.com/mlabonne/db0761e74175573292acf497da9e5d95) | 36.1 | 23.76 | 43.6 | 47.64 | 29.41 | | [google/gemma-2b](https://huggingface.co/google/gemma-2b) [📄](https://gist.github.com/mlabonne/7df1f238c515a5f63a750c8792cef59e) | 34.26 | 22.7 | 43.35 | 39.96 | 31.03 | ## Model description - **Model type:** A 2B parameter GPT-like model finetuned on 100,000 samples consisting of an equal proportion of English and Tamil samples. - **Language(s):** Bilingual. English and Tamil. - **License:** [Google Gemma Terms of Use](https://ai.google.dev/gemma/terms) - **Finetuned from model:** [abhinand/gemma-2b-tamil](https://huggingface.co/abhinand/gemma-2b-tamil) - **Training Precision:** `bfloat16` - **Training Hardware:** 4x Nvidia RTX 3090 GPUs - **Training Cost:** $20 ## Support my work If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated. [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/abhinand.b) ## Prompting Format [Alpaca] **Prompt Template Without Input** ``` {system_prompt} ### Instruction: {instruction or query} ### Response: {response} ``` **Prompt Template With Input** ``` {system_prompt} ### Instruction: {instruction or query} ### Input: {input} ### Response: {response} ``` ## Usage Note It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications. ## Meet the Developers Get to know the creators behind this innovative model and follow their contributions to the field: - [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/) We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__gemma-2b-it-tamil-v0.1-alpha) | Metric |Value| |---------------------------------|----:| |Avg. |47.60| |AI2 Reasoning Challenge (25-Shot)|50.09| |HellaSwag (10-Shot) |71.41| |MMLU (5-Shot) |39.94| |TruthfulQA (0-shot) |42.63| |Winogrande (5-shot) |64.96| |GSM8k (5-shot) |16.60|
abhinand/tamil-llama-13b-instruct-v0.1
abhinand
2024-03-04T12:55:37Z
1,431
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ta", "en", "arxiv:2311.05845", "license:llama2", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-07T10:21:46Z
--- language: - ta - en license: llama2 model-index: - name: tamil-llama-13b-instruct-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 54.52 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 79.35 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 50.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 41.22 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 7.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard --- # Tamil LLaMA 13B Instruct v0.1 Welcome to the inaugural release of the Tamil LLaMA 13B instruct model – an important step in advancing LLMs for the Tamil language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks. To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP) ]() that outlines our journey and the model's potential impact. ## Model description The Tamil LLaMA models have been enhanced and tailored specifically with an extensive Tamil vocabulary of 16,000 tokens, building upon the foundation set by the original LLaMA-2. - **Model type:** A 13B parameter GPT-like model fine-tuned on [Tamil-Alpaca-Orca](https://huggingface.co/datasets/abhinand/tamil-alpaca-orca) - a mix of Tamil-translated [Stanford-Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and a subset of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) datasets. - **Language(s):** Tamil and English - **License:** GNU General Public License v3.0 - **Finetuned from model:** [abhinand/tamil-llama-13b-base-v0.1](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) - **Training Precision:** `float16` - **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) ## Prompting Format **Prompt Template Without Input** ``` {system_prompt} ### Instruction: {instruction or query} ### Response: {response} ``` **Prompt Template With Input** ``` {system_prompt} ### Instruction: {instruction or query} ### Input: {input} ### Response: {response} ``` ## Related Models | Model | Type | Data | Base Model | # Params | Download Links | |--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------| | Tamil LLaMA 7B Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) | | Tamil LLaMA 13B Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) | | Tamil LLaMA 7B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) | | Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](abhinand/tamil-llama-13b-instruct-v0.1) | ## Usage Note It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications. ## Meet the Developers Get to know the creators behind this innovative model and follow their contributions to the field: - [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/) ## Citation If you use this model or any of the the Tamil-Llama datasets in your research, please cite: ```bibtex @misc{balachandran2023tamilllama, title={Tamil-Llama: A New Tamil Language Model Based on Llama 2}, author={Abhinand Balachandran}, year={2023}, eprint={2311.05845}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__tamil-llama-13b-instruct-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |51.59| |AI2 Reasoning Challenge (25-Shot)|54.52| |HellaSwag (10-Shot) |79.35| |MMLU (5-Shot) |50.37| |TruthfulQA (0-shot) |41.22| |Winogrande (5-shot) |76.56| |GSM8k (5-shot) | 7.51|
wandb/gemma-7b-zephyr-dpo
wandb
2024-03-04T12:54:13Z
22
2
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:wandb/gemma-7b-zephyr-sft", "base_model:finetune:wandb/gemma-7b-zephyr-sft", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T11:39:50Z
--- license: other library_name: transformers datasets: - HuggingFaceH4/ultrafeedback_binarized base_model: wandb/gemma-7b-zephyr-sft license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms model-index: - name: gemma-7b-zephyr-dpo results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.84 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.44 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 60.6 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 42.48 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 75.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 49.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tcapelle/gemma-7b-zephyr-dpo name: Open LLM Leaderboard --- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/llm_surgery/gemma-zephyr) # Gemma 7B Zephyr DPO The [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) DPO recipe applied on top of SFT finetuned Gemma 7B ## Model description - **Model type:** A 8.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **Finetuned from model:** [wandb/gemma-7b-zephyr-sft](https://huggingface.co/wandb/gemma-7b-zephyr-sft/) ## Recipe We trained using the DPO script in [alignment handbook recipe](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_dpo.py) and logging to W&B Visit the [W&B workspace here](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape) ## License This model has the same license as the [original Gemma model collection](https://ai.google.dev/gemma/terms) ## Compute provided by [Lambda Labs](https://lambdalabs.com/) - 8xA100 80GB node # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_tcapelle__gemma-7b-zephyr-dpo) | Metric |Value| |---------------------------------|----:| |Avg. |61.62| |AI2 Reasoning Challenge (25-Shot)|60.84| |HellaSwag (10-Shot) |80.44| |MMLU (5-Shot) |60.60| |TruthfulQA (0-shot) |42.48| |Winogrande (5-shot) |75.37| |GSM8k (5-shot) |49.96|
leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512
leewaay
2024-03-04T12:53:24Z
7
1
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "ko", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-04T08:14:03Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - ko --- # leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512') model = AutoModel.from_pretrained('leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=leewaay/klue-roberta-large-klueNLI-klueSTS-MSL512) ## Training This model uses the pretrained [`klue/roberta-large`](https://huggingface.co/klue/roberta-large) model. Please refer to the model card for more detailed information about the pre-training procedure. It continued training on [klue STS data](https://huggingface.co/datasets/klue#sts) after it loaded a model trained on [klue NLI data](https://huggingface.co/datasets/klue#nli). The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 183 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 4, "evaluation_steps": 18, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 74, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors [Wonseok Lee](https://github.com/leewaay)
leewaay/klue-roberta-base-klueNLI-klueSTS-MSL75
leewaay
2024-03-04T12:52:08Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "ko", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-04T11:26:48Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - ko --- # leewaay/klue-roberta-base-klueNLI-klueSTS-MSL75 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('leewaay/klue-roberta-base-klueNLI-klueSTS-MSL75') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('leewaay/klue-roberta-base-klueNLI-klueSTS-MSL75') model = AutoModel.from_pretrained('leewaay/klue-roberta-base-klueNLI-klueSTS-MSL75') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=leewaay/klue-roberta-base-klueNLI-klueSTS-MSL75) ## Training This model uses the pretrained [`klue/roberta-base`](https://huggingface.co/klue/roberta-base) model. Please refer to the model card for more detailed information about the pre-training procedure. It continued training on [klue STS data](https://huggingface.co/datasets/klue#sts) after it loaded a model trained on [klue NLI data](https://huggingface.co/datasets/klue#nli). The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 183 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 4, "evaluation_steps": 18, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 74, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors [Wonseok Lee](https://github.com/leewaay)
leewaay/klue-roberta-base-klueNLI-klueSTS-MSL256
leewaay
2024-03-04T12:51:42Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "ko", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-04T11:29:15Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - ko --- # leewaay/klue-roberta-base-klueNLI-klueSTS-MSL256 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('leewaay/klue-roberta-base-klueNLI-klueSTS-MSL256') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('leewaay/klue-roberta-base-klueNLI-klueSTS-MSL256') model = AutoModel.from_pretrained('leewaay/klue-roberta-base-klueNLI-klueSTS-MSL256') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=leewaay/klue-roberta-base-klueNLI-klueSTS-MSL256) ## Training This model uses the pretrained [`klue/roberta-base`](https://huggingface.co/klue/roberta-base) model. Please refer to the model card for more detailed information about the pre-training procedure. It continued training on [klue STS data](https://huggingface.co/datasets/klue#sts) after it loaded a model trained on [klue NLI data](https://huggingface.co/datasets/klue#nli). The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 183 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 4, "evaluation_steps": 18, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 74, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors [Wonseok Lee](https://github.com/leewaay)
leewaay/klue-roberta-base-klueNLI-klueSTS-MSL512
leewaay
2024-03-04T12:51:09Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "ko", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-04T11:30:46Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - ko --- # leewaay/klue-roberta-base-klueNLI-klueSTS-MSL512 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('leewaay/klue-roberta-base-klueNLI-klueSTS-MSL512') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('leewaay/klue-roberta-base-klueNLI-klueSTS-MSL512') model = AutoModel.from_pretrained('leewaay/klue-roberta-base-klueNLI-klueSTS-MSL512') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=leewaay/klue-roberta-base-klueNLI-klueSTS-MSL512) ## Training This model uses the pretrained [`klue/roberta-base`](https://huggingface.co/klue/roberta-base) model. Please refer to the model card for more detailed information about the pre-training procedure. It continued training on [klue STS data](https://huggingface.co/datasets/klue#sts) after it loaded a model trained on [klue NLI data](https://huggingface.co/datasets/klue#nli). The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 183 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 4, "evaluation_steps": 18, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 74, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors [Wonseok Lee](https://github.com/leewaay)
fhai50032/xLakeChat
fhai50032
2024-03-04T12:50:58Z
47
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "xDAN-AI/xDAN-L1-Chat-RL-v1", "fhai50032/BeagleLake-7B-Toxic", "base_model:fhai50032/BeagleLake-7B-Toxic", "base_model:merge:fhai50032/BeagleLake-7B-Toxic", "base_model:xDAN-AI/xDAN-L1-Chat-RL-v1", "base_model:merge:xDAN-AI/xDAN-L1-Chat-RL-v1", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-11T02:43:39Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - xDAN-AI/xDAN-L1-Chat-RL-v1 - fhai50032/BeagleLake-7B-Toxic base_model: - xDAN-AI/xDAN-L1-Chat-RL-v1 - fhai50032/BeagleLake-7B-Toxic model-index: - name: xLakeChat results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.37 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.64 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 59.32 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 52.96 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 50.27 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/xLakeChat name: Open LLM Leaderboard --- # xLakeChat xLakeChat is a merge of the following models * [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1) * [fhai50032/BeagleLake-7B-Toxic](https://huggingface.co/fhai50032/BeagleLake-7B-Toxic) ## 🧩 Configuration ```yaml models: - model: senseable/WestLake-7B-v2 # no params for base model - model: xDAN-AI/xDAN-L1-Chat-RL-v1 parameters: weight: 0.73 density: 0.64 - model: fhai50032/BeagleLake-7B-Toxic parameters: weight: 0.46 density: 0.55 merge_method: dare_ties base_model: senseable/WestLake-7B-v2 parameters: normalize: true int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "fhai50032/xLakeChat" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__xLakeChat) | Metric |Value| |---------------------------------|----:| |Avg. |63.72| |AI2 Reasoning Challenge (25-Shot)|62.37| |HellaSwag (10-Shot) |82.64| |MMLU (5-Shot) |59.32| |TruthfulQA (0-shot) |52.96| |Winogrande (5-shot) |74.74| |GSM8k (5-shot) |50.27|
fhai50032/SamChat
fhai50032
2024-03-04T12:48:29Z
52
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "Intel/neural-chat-7b-v3-3", "cognitivecomputations/samantha-mistral-7b", "base_model:Intel/neural-chat-7b-v3-3", "base_model:merge:Intel/neural-chat-7b-v3-3", "base_model:cognitivecomputations/samantha-mistral-7b", "base_model:merge:cognitivecomputations/samantha-mistral-7b", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-10T23:23:15Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - Intel/neural-chat-7b-v3-3 - cognitivecomputations/samantha-mistral-7b base_model: - Intel/neural-chat-7b-v3-3 - cognitivecomputations/samantha-mistral-7b model-index: - name: SamChat results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.2 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 81.88 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 59.7 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 52.89 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 72.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 41.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/SamChat name: Open LLM Leaderboard --- # SamChat SamChat is a merge of the following models * [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) * [cognitivecomputations/samantha-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-7b) ## 🧩 Configuration ```yaml models: - model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo # no params for base model - model: Intel/neural-chat-7b-v3-3 parameters: weight: 0.55 density: 0.46 - model: cognitivecomputations/samantha-mistral-7b parameters: weight: 0.64 density: 0.55 merge_method: dare_ties base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo parameters: normalize: true int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "fhai50032/SamChat" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__SamChat) | Metric |Value| |---------------------------------|----:| |Avg. |61.68| |AI2 Reasoning Challenge (25-Shot)|62.20| |HellaSwag (10-Shot) |81.88| |MMLU (5-Shot) |59.70| |TruthfulQA (0-shot) |52.89| |Winogrande (5-shot) |72.14| |GSM8k (5-shot) |41.24|
juhwanlee/llmdo-Mistral-7B-case-c-v1
juhwanlee
2024-03-04T12:45:54Z
46
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T00:55:40Z
--- language: - en license: apache-2.0 datasets: - Open-Orca/OpenOrca model-index: - name: llmdo-Mistral-7B-case-c-v1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.03 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.55 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 45.82 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 39.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard --- # Model Details * Model Description: This model is test for data ordering. * Developed by: Juhwan Lee * Model Type: Large Language Model # Model Architecture This model is based on Mistral-7B-v0.1. We fine-tuning this model for data ordering task. Mistral-7B-v0.1 is a transformer model, with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We random sample Open-Orca dataset. (We finetune the 100,000 dataset) # Guthub https://github.com/trailerAI # License Apache License 2.0 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_juhwanlee__llmdo-Mistral-7B-case-c-v1) | Metric |Value| |---------------------------------|----:| |Avg. |62.16| |AI2 Reasoning Challenge (25-Shot)|62.03| |HellaSwag (10-Shot) |83.55| |MMLU (5-Shot) |62.69| |TruthfulQA (0-shot) |45.82| |Winogrande (5-shot) |79.08| |GSM8k (5-shot) |39.80|
zjunlp/MolGen-large-opt
zjunlp
2024-03-04T12:44:17Z
81
2
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "molecular language model", "SELFIES", "molecule optimization", "arxiv:2301.11259", "autotrain_compatible", "region:us" ]
text2text-generation
2023-02-19T09:31:54Z
--- tags: - molecular language model - SELFIES - molecule optimization inference: false --- # MolGen-large-opt MolGen-large-opt was introduced in the paper ["Domain-Agnostic Molecular Generation with Self-feedback"](https://arxiv.org/pdf/2301.11259.pdf) and first released in [this repository](https://github.com/zjunlp/MolGen). ## Model description MolGen-large-opt is the fine-tuned version of [MolGen-large](https://huggingface.co/zjunlp/MolGen-large). MolGen-large is the first pre-trained model that only produces chemically valid molecules. With a training corpus of over 100 million molecules in SELFIES representation, MolGen-large learns the intrinsic structural patterns of molecules by mapping corrupted SELFIES to their original forms. Specifically, MolGen-large employs a bidirectional Transformer as its encoder and an autoregressive Transformer as its decoder. Through its carefully designed multi-task molecular prefix tuning (MPT), MolGen-large-opt can generate molecules with desired properties, making it a valuable tool for molecular optimization. ![image.png](./molgen.png) ## Intended uses You can use the fine-tuned model for molecule optimization for downstream tasks. See the [repository](https://github.com/zjunlp/MolGen) to look for fine-tune details on a task that interests you. ### How to use Molecule optimization example: ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained("zjunlp/MolGen-large-opt") >>> model = AutoModelForSeq2SeqLM.from_pretrained("zjunlp/MolGen-large-opt") >>> sf_input = tokenizer("[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]", return_tensors="pt") >>> # beam search >>> molecules = model.generate(input_ids=sf_input["input_ids"], attention_mask=sf_input["attention_mask"], max_length=35, min_length=5, num_return_sequences=5, num_beams=5) >>> sf_output = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).replace(" ","") for g in molecules] ['[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]', '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]', '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]', '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]', '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]'] ``` ### BibTeX entry and citation info ```bibtex @inproceedings{fang2023domain, author = {Yin Fang and Ningyu Zhang and Zhuo Chen and Xiaohui Fan and Huajun Chen}, title = {Domain-Agnostic Molecular Generation with Chemical Feedback}, booktitle = {{ICLR}}, publisher = {OpenReview.net}, year = {2024}, url = {https://openreview.net/pdf?id=9rPyHyjfwP} } ```
Stopwolf/DistilabelCerberus-7B-slerp
Stopwolf
2024-03-04T12:42:24Z
60
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "dvilasuero/DistilabelBeagle14-7B", "teknium/OpenHermes-2.5-Mistral-7B", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T11:31:39Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - dvilasuero/DistilabelBeagle14-7B - teknium/OpenHermes-2.5-Mistral-7B model-index: - name: DistilabelCerberus-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.17 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/DistilabelCerberus-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/DistilabelCerberus-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.2 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/DistilabelCerberus-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.93 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/DistilabelCerberus-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.48 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/DistilabelCerberus-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.83 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/DistilabelCerberus-7B-slerp name: Open LLM Leaderboard --- # DistilabelCerberus-7B-slerp DistilabelCerberus-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [dvilasuero/DistilabelBeagle14-7B](https://huggingface.co/dvilasuero/DistilabelBeagle14-7B) * [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: dvilasuero/DistilabelBeagle14-7B layer_range: [0, 32] - model: teknium/OpenHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: teknium/OpenHermes-2.5-Mistral-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## Results | | ARC-C | Hellaswag | ThruthfulQA | Winogrande | GSM8K | | | |-----------------------------|-------|-----------|-------------|------------|-------|---|---| | OpenHermes-2.5-Mistral-7B | 61.26 | 65.22 | 52.24 | 78.06 | 26.08 | | | | DistilabelBeagle14-7B | ? | ? | 71.66 | ? | ? | | | | DistilabelCerberus-7B-slerp | 65.44 | 69.29 | 60.93 | 79.48 | 69.82 | | | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Stopwolf__DistilabelCerberus-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |71.56| |AI2 Reasoning Challenge (25-Shot)|68.17| |HellaSwag (10-Shot) |86.78| |MMLU (5-Shot) |64.20| |TruthfulQA (0-shot) |60.93| |Winogrande (5-shot) |79.48| |GSM8k (5-shot) |69.83|
Neomedallion/a2c-PandaPickAndPlace-v3
Neomedallion
2024-03-04T12:41:24Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlace-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-04T12:37:06Z
--- library_name: stable-baselines3 tags: - PandaPickAndPlace-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPickAndPlace-v3 type: PandaPickAndPlace-v3 metrics: - type: mean_reward value: -0.17 +/- 0.07 name: mean_reward verified: false --- # **A2C** Agent playing **PandaPickAndPlace-v3** This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
s3nh/Mistral_Sonyichi-7B-slerp
s3nh
2024-03-04T12:39:38Z
55
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "en", "base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.1", "base_model:merge:EmbeddedLLM/Mistral-7B-Merge-14-v0.1", "base_model:SanjiWatsuki/Kunoichi-7B", "base_model:merge:SanjiWatsuki/Kunoichi-7B", "base_model:SanjiWatsuki/Sonya-7B", "base_model:merge:SanjiWatsuki/Sonya-7B", "license:openrail", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-05T14:43:17Z
--- language: - en license: openrail library_name: transformers tags: - mergekit - merge base_model: - SanjiWatsuki/Sonya-7B - EmbeddedLLM/Mistral-7B-Merge-14-v0.1 - SanjiWatsuki/Kunoichi-7B pipeline_tag: text-generation model-index: - name: Mistral_Sonyichi-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.49 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.43 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.25 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.84 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp name: Open LLM Leaderboard --- # merged ![intro](332f1afc-d0b6-47f8-8a1c-b4bdccea94b4.png "Intro") This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [SanjiWatsuki/Sonya-7B](https://huggingface.co/SanjiWatsuki/Sonya-7B) * [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1) * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1 dtype: bfloat16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.5, 0.3, 0.7, 1.0] - filter: mlp value: [1.0, 0.5, 0.7, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: SanjiWatsuki/Sonya-7B - layer_range: [0, 32] model: SanjiWatsuki/Kunoichi-7B ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Mistral_Sonyichi-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |70.52| |AI2 Reasoning Challenge (25-Shot)|67.49| |HellaSwag (10-Shot) |86.43| |MMLU (5-Shot) |63.58| |TruthfulQA (0-shot) |63.25| |Winogrande (5-shot) |78.53| |GSM8k (5-shot) |63.84|
s3nh/Severusectum-7B-DPO
s3nh
2024-03-04T12:39:33Z
125
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "en", "base_model:FelixChao/Sectumsempra-7B-DPO", "base_model:merge:FelixChao/Sectumsempra-7B-DPO", "base_model:PetroGPT/WestSeverus-7B-DPO-v2", "base_model:merge:PetroGPT/WestSeverus-7B-DPO-v2", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T22:51:05Z
--- language: - en license: mit library_name: transformers tags: - mergekit - merge base_model: - FelixChao/Sectumsempra-7B-DPO - FelixChao/WestSeverus-7B-DPO-v2 pipeline_tag: text-generation model-index: - name: Severusectum-7B-DPO results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.5 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.55 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 72.45 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.27 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO name: Open LLM Leaderboard --- ![image](severus.png) # Severusectum-7B-DPO This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [FelixChao/Sectumsempra-7B-DPO](https://huggingface.co/FelixChao/Sectumsempra-7B-DPO) * [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: FelixChao/Sectumsempra-7B-DPO dtype: bfloat16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.5, 0.3, 0.7, 1.0] - filter: mlp value: [1.0, 0.5, 0.7, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: FelixChao/Sectumsempra-7B-DPO - layer_range: [0, 32] model: FelixChao/WestSeverus-7B-DPO-v2 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Severusectum-7B-DPO) | Metric |Value| |---------------------------------|----:| |Avg. |75.18| |AI2 Reasoning Challenge (25-Shot)|71.50| |HellaSwag (10-Shot) |88.55| |MMLU (5-Shot) |64.79| |TruthfulQA (0-shot) |72.45| |Winogrande (5-shot) |83.27| |GSM8k (5-shot) |70.51|
s3nh/Noromaid-Aeryth-7B
s3nh
2024-03-04T12:39:27Z
60
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:NeuralNovel/Aeryth-7B-v0.1", "base_model:merge:NeuralNovel/Aeryth-7B-v0.1", "base_model:NeverSleep/Noromaid-7b-v0.2", "base_model:merge:NeverSleep/Noromaid-7b-v0.2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T08:00:22Z
--- license: apache-2.0 library_name: transformers tags: - mergekit - merge base_model: - NeverSleep/Noromaid-7b-v0.2 - NeuralNovel/Aeryth-7B-v0.1 pipeline_tag: text-generation model-index: - name: Noromaid-Aeryth-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 56.74 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 78.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 57.29 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 65.66 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 71.82 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 16.76 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard --- # Noromaid-Aeryth-7B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2) * [NeuralNovel/Aeryth-7B-v0.1](https://huggingface.co/NeuralNovel/Aeryth-7B-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: NeverSleep/Noromaid-7b-v0.2 dtype: float16 merge_method: slerp parameters: t: - filter: self_attn value: [0.22, 0.61, 0.46, 0.77, 1.0] - filter: mlp value: [0.78, 0.39, 0.54, 0.23, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: NeuralNovel/Aeryth-7B-v0.1 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Noromaid-Aeryth-7B) | Metric |Value| |---------------------------------|----:| |Avg. |57.82| |AI2 Reasoning Challenge (25-Shot)|56.74| |HellaSwag (10-Shot) |78.62| |MMLU (5-Shot) |57.29| |TruthfulQA (0-shot) |65.66| |Winogrande (5-shot) |71.82| |GSM8k (5-shot) |16.76|
s3nh/SeverusWestLake-7B-DPO
s3nh
2024-03-04T12:39:21Z
63
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:FelixChao/Sectumsempra-7B-DPO", "base_model:merge:FelixChao/Sectumsempra-7B-DPO", "base_model:cognitivecomputations/WestLake-7B-v2-laser", "base_model:merge:cognitivecomputations/WestLake-7B-v2-laser", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-04T20:59:38Z
--- license: mit library_name: transformers tags: - mergekit - merge base_model: - FelixChao/Sectumsempra-7B-DPO - cognitivecomputations/WestLake-7B-v2-laser pipeline_tag: text-generation model-index: - name: SeverusWestLake-7B-DPO results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.18 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.94 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.65 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.49 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 86.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/SeverusWestLake-7B-DPO name: Open LLM Leaderboard --- # SeverusWestLake-7B-DPO ![image](west.png) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [FelixChao/Sectumsempra-7B-DPO](https://huggingface.co/FelixChao/Sectumsempra-7B-DPO) * [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: FelixChao/Sectumsempra-7B-DPO dtype: bfloat16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.5, 0.3, 0.7, 1.0] - filter: mlp value: [1.0, 0.5, 0.7, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: FelixChao/Sectumsempra-7B-DPO - layer_range: [0, 32] model: cognitivecomputations/WestLake-7B-v2-laser ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__SeverusWestLake-7B-DPO) | Metric |Value| |---------------------------------|----:| |Avg. |75.42| |AI2 Reasoning Challenge (25-Shot)|72.18| |HellaSwag (10-Shot) |88.94| |MMLU (5-Shot) |64.65| |TruthfulQA (0-shot) |71.49| |Winogrande (5-shot) |86.11| |GSM8k (5-shot) |69.14|
leewaay/kpf-bert-base-klueNLI-klueSTS-MSL512
leewaay
2024-03-04T12:38:31Z
3
1
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "ko", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-04T11:51:10Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - ko --- # leewaay/kpf-bert-base-klueNLI-klueSTS-MSL512 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('leewaay/kpf-bert-base-klueNLI-klueSTS-MSL512') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('leewaay/kpf-bert-base-klueNLI-klueSTS-MSL512') model = AutoModel.from_pretrained('leewaay/kpf-bert-base-klueNLI-klueSTS-MSL512') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=leewaay/kpf-bert-base-klueNLI-klueSTS-MSL512) ## Training This model uses the pretrained [`jinmang2/kpfbert`](https://huggingface.co/jinmang2/kpfbert) model. Please refer to the model card for more detailed information about the pre-training procedure. It continued training on [klue STS data](https://huggingface.co/datasets/klue#sts) after it loaded a model trained on [klue NLI data](https://huggingface.co/datasets/klue#nli). The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 183 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 4, "evaluation_steps": 18, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 74, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors [Wonseok Lee](https://github.com/leewaay)
chanchan7/vicuna-qlora
chanchan7
2024-03-04T12:34:28Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "llama", "alignment-handbook", "generated_from_trainer", "trl", "sft", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:lmsys/vicuna-7b-v1.5", "base_model:adapter:lmsys/vicuna-7b-v1.5", "license:llama2", "4-bit", "bitsandbytes", "region:us" ]
null
2024-03-03T21:26:52Z
--- license: llama2 library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - sft - generated_from_trainer datasets: - HuggingFaceH4/ultrachat_200k base_model: lmsys/vicuna-7b-v1.5 model-index: - name: vicuna-qlora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vicuna-qlora This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: 0.9819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9771 | 1.0 | 570 | 0.9819 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.2.1+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
ENERGY-DRINK-LOVE/leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT
ENERGY-DRINK-LOVE
2024-03-04T12:33:42Z
2,246
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:LDCC/LDCC-SOLAR-10.7B", "base_model:finetune:LDCC/LDCC-SOLAR-10.7B", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T09:01:15Z
--- license: cc-by-nc-4.0 base_model: LDCC/LDCC-SOLAR-10.7B tags: - trl - sft - generated_from_trainer datasets: - generator model-index: - name: leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # leaderboard_inst_v1_LDCC-SOLAR-10.7B_SFT This model is a fine-tuned version of [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) on the generator dataset. * dataset: leaderboard dataset 1.3 + open-Hermes dataset(ko-mt) ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu118 - Datasets 2.17.1 - Tokenizers 0.15.2
axel-rda/ARIA-70B-V2-Quantized
axel-rda
2024-03-04T12:27:21Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-04T12:14:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yanolja/EEVE-Korean-Instruct-2.8B-v1.0
yanolja
2024-03-04T12:26:40Z
2,859
21
transformers
[ "transformers", "safetensors", "phi", "text-generation", "generated_from_trainer", "conversational", "custom_code", "arxiv:2402.14714", "arxiv:2310.01377", "arxiv:2306.02707", "base_model:yanolja/EEVE-Korean-2.8B-v1.0", "base_model:finetune:yanolja/EEVE-Korean-2.8B-v1.0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T04:40:06Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: yanolja/EEVE-Korean-2.8B-v1.0 model-index: - name: yanolja/EEVE-Korean-Instruct-2.8B-v1.0 results: [] --- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <p align="left"> <img src="https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0/resolve/main/eeve_logo.webp" width="50%"/> <p> # EEVE-Korean-Instruct-2.8B-v1.0 ## Join Our Community on Discord! If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLM is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m). ## Our Dedicated Team (Alphabetical Order) | Research | Engineering | Product Management | UX Design | |-----------------|-----------------|--------------------|-------------- | Myeongho Jeong | Geon Kim | Bokyung Huh | Eunsue Choi | | Seungduk Kim | Rifqi Alfi | | | | Seungtaek Choi | Sanghoon Han | | | | | Suhyun Kang | | | ## About the Model This model is a fine-tuned version of [yanolja/EEVE-Korean-2.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-2.8B-v1.0), which is a Korean vocabulary-extended version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2). Specifically, we utilized Direct Preference Optimization (DPO) through the use of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714). ## Prompt Template ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. Human: {prompt} Assistant: ``` ## How to Use it ```python from transformers import AutoTokenizer from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("yanolja/EEVE-Korean-Instruct-2.8B-v1.0", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("yanolja/EEVE-Korean-Instruct-2.8B-v1.0", trust_remote_code=True) prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n" text = '한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.\n\n(A) 경성\n(B) 부산\n(C) 평양\n(D) 서울\n(E) 전주' model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt') outputs = model.generate(**model_inputs, max_new_tokens=256) output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0] print(output_text) ``` ### Example Output ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. Human: 한국의 수도는 어디인가요? 아래 선택지 중 골라주세요. (A) 경성 (B) 부산 (C) 평양 (D) 서울 (E) 전주 Assistant: 한국의 수도는 (D) 서울입니다. 서울은 수도권과 수도권 내의 주요 도시들을 포함하는 광역 행정구역으로, 대한민국의 수도입니다. 서울은 수도권 인구의 약 70%를 차지하며, 대한민국의 경제, 정치, 문화의 중심지입니다. ``` ## Training Data - Korean-translated version of [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup) - Korean-translated version of [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned) - No other dataset was used ## Citation ``` @misc{kim2024efficient, title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models}, author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong}, year={2024}, eprint={2402.14714}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{cui2023ultrafeedback, title={UltraFeedback: Boosting Language Models with High-quality Feedback}, author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun}, year={2023}, eprint={2310.01377}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{SlimOrcaDedup, title = {SlimOrca Dedup: A Deduplicated Subset of SlimOrca}, author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium" and Nathan Hoos}, year = {2023}, publisher = {HuggingFace}, url = {https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup/} } ``` ``` @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_yanolja__EEVE-Korean-Instruct-2.8B-v1.0) | Metric |Value| |---------------------------------|----:| |Avg. |58.71| |AI2 Reasoning Challenge (25-Shot)|58.28| |HellaSwag (10-Shot) |72.42| |MMLU (5-Shot) |53.35| |TruthfulQA (0-shot) |48.32| |Winogrande (5-shot) |74.82| |GSM8k (5-shot) |45.11|
Weyaxi/Draco-8x7B
Weyaxi
2024-03-04T12:25:16Z
75
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "openchat", "hermes", "dolphin", "bagel", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T19:04:50Z
--- license: apache-2.0 tags: - moe - openchat - hermes - dolphin - bagel model-index: - name: Draco-8x7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.02 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.24 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 62.65 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 66.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B name: Open LLM Leaderboard --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/VWIJplnya5L7wmGxK4Lut.jpeg) # 💫 Draco-8x7B This is the model for Draco-8x7B. I used [this repo](https://bit.ly/weyaxi-moe-repo) to make this MOE model. This model's experts are not using any merged models. # 📚 Other branches (Number of Experts Per Token) Other branches that this repository contains differ only slightly (from a git diff perspective) in terms of the number of experts per token. Usually, a higher value for the number of experts per token will result in better performance, but it may also lead to increased inference time. | Number of experts per token | Link of the branch | | ---------------------------- | -------------------------------------------------------------------------------------------| | 2 | [Main](https://huggingface.co/Weyaxi/Draco-8x7B/tree/main) | | 3 | [3-experts-per-token](https://huggingface.co/Weyaxi/Draco-8x7B/tree/3-experts-per-token) | | 4 | [4-experts-per-token](https://huggingface.co/Weyaxi/Draco-8x7B/tree/4-experts-per-token) | | 6 | [6-experts-per-token](https://huggingface.co/Weyaxi/Draco-8x7B/tree/6-experts-per-token) | | 8 | [8-experts-per-token](https://huggingface.co/Weyaxi/Draco-8x7B/tree/8-experts-per-token) | # 💬 Prompt Template(s): This model includes many models, so providing only one prompt template is not enough. You can use and try these prompt templates and decide which works best for you. **Note:** The current chat template in the tokenizer config is set to [openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)'s chat template. **Note 2:** It is also important to note that [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) is using many prompt templates other than I provided. You can visit [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) to learn more about this templates. ### GPT4 Correct Used in [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106), [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) ``` GPT4 Correct User: {user}<|end_of_turn|>GPT4 Correct Assistant: {asistant}<|end_of_turn|> ``` ### ChatML: Used in [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1), [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser), [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) ``` <|im_start|>system {system}<|im_end|> <|im_start|>user {user}<|im_end|> <|im_start|>assistant {asistant}<|im_end|> ``` ### Math Alpaca Used in [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response: Let's think step by step. ``` # 🛠️ Yaml Config <details><summary>See config</summary> ```yaml base_model: openchat/openchat-3.5-0106 gate_mode: hidden dtype: bfloat16 experts: - source_model: openchat/openchat-3.5-0106 positive_prompts: # General (Mistral finetune) - "chat" - "assistant" - "tell me" - "explain" - source_model: teknium/OpenHermes-2.5-Mistral-7B positive_prompts: # General (Mistral finetune) - "interact" - "converse" - "respond" - "express" - source_model: jondurbin/bagel-dpo-7b-v0.1 positive_prompts: # Science (Mistral finetune) - "science" - "biology" - "chemistry" - "physics" - "Newton's laws" - "scientific method" - "periodic table" - "photosynthesis process" - source_model: meta-math/MetaMath-Mistral-7B positive_prompts: # Math (Mistral finetune) - "reason" - "math" - "mathematics" - "solve" - "count" - source_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser positive_prompts: # Uncensored (Mistral finetune) - "dolphin" - "uncensored" - "unbiased" - "unfiltered" - "unrestricted" - "offensive" - source_model: beowolx/CodeNinja-1.0-OpenChat-7B positive_prompts: # Code (openchat-3.5-1210 finetune) - "code" - "script" - "python" - "javascript" - "programming" - "algorithm" - source_model: senseable/WestLake-7B-v2 positive_prompts: # Roleplay (Unknown finetune) - "storywriting" - "write" - "scene" - "story" - "character" - "act as" - "you are" - source_model: snorkelai/Snorkel-Mistral-PairRM-DPO positive_prompts: # Question Answering (? Mistral-7B-Instruct-v0.2 finetune ?) - "what happens" - "what is" - "what can" - "why" - "who" - "can a" ``` </details><br> # 🔄 Quantizationed versions Quantizationed versions of this model is available thanks to [TheBloke](https://hf.co/TheBloke). ##### GPTQ - [TheBloke/Draco-8x7B-GPTQ](https://huggingface.co/TheBloke/Draco-8x7B-GPTQ) ##### GGUF - [TheBloke/Draco-8x7B-GGUF](https://huggingface.co/TheBloke/Draco-8x7B-GGUF) ##### AWQ - [TheBloke/Draco-8x7B-AWQ](https://huggingface.co/TheBloke/Draco-8x7B-AWQ) If you would like to support me: [☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__Draco-8x7B) | Metric |Value| |---------------------------------|----:| |Avg. |70.89| |AI2 Reasoning Challenge (25-Shot)|65.02| |HellaSwag (10-Shot) |85.24| |MMLU (5-Shot) |64.96| |TruthfulQA (0-shot) |62.65| |Winogrande (5-shot) |80.66| |GSM8k (5-shot) |66.79|
lodrick-the-lafted/Hermes-Instruct-7B-100K
lodrick-the-lafted
2024-03-04T12:24:50Z
123
3
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "dataset:lodrick-the-lafted/Hermes-100K", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-20T03:04:10Z
--- license: apache-2.0 datasets: - lodrick-the-lafted/Hermes-100K model-index: - name: Hermes-Instruct-7B-100K results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 61.52 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.84 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 60.95 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.62 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.87 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.97 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K name: Open LLM Leaderboard --- <img src=https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-100K/resolve/main/hermes-instruct.png> # Hermes-Instruct-7B-v0.2 [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) trained with 100K rows of [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes), in Alpaca format. <br /> <br /> # Prompt Format Both the default Mistral-Instruct tags and Alpaca are fine, so either: ``` <s>[INST] {sys_prompt} {instruction} [/INST] ``` or ``` {sys_prompt} ### Instruction: {instruction} ### Response: ``` The tokenizer default is Alpaca this time around. <br /> <br /> # Usage ```python from transformers import AutoTokenizer import transformers import torch model = "lodrick-the-lafted/Hermes-Instruct-7B-100K" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.bfloat16}, ) messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Hermes-Instruct-7B-100K) | Metric |Value| |---------------------------------|----:| |Avg. |64.96| |AI2 Reasoning Challenge (25-Shot)|61.52| |HellaSwag (10-Shot) |82.84| |MMLU (5-Shot) |60.95| |TruthfulQA (0-shot) |63.62| |Winogrande (5-shot) |76.87| |GSM8k (5-shot) |43.97|
lodrick-the-lafted/Platyboros-Instruct-7B
lodrick-the-lafted
2024-03-04T12:24:31Z
95
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "dataset:garage-bAInd/Open-Platypus", "dataset:jondurbin/airoboros-3.2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-21T22:05:14Z
--- license: apache-2.0 datasets: - garage-bAInd/Open-Platypus - jondurbin/airoboros-3.2 model-index: - name: Platyboros-Instruct-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 57.76 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.59 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.05 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.92 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B name: Open LLM Leaderboard --- <img src=https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B/resolve/main/platyboros.png> # Platyboros-Instruct-7B [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) trained with [jondurbin/airoboros-3.2](https://huggingface.co/datasets/jondurbin/airoboros-3.2) and [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), in Alpaca format. <br /> <br /> # Prompt Format Both the default Mistral-Instruct tags and Alpaca are fine, so either: ``` <s>[INST] {sys_prompt} {instruction} [/INST] ``` or ``` {sys_prompt} ### Instruction: {instruction} ### Response: ``` The tokenizer default is Alpaca this time around. <br /> <br /> # Usage ```python from transformers import AutoTokenizer import transformers import torch model = "lodrick-the-lafted/Platyboros-Instruct-7B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.bfloat16}, ) messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Platyboros-Instruct-7B) | Metric |Value| |---------------------------------|----:| |Avg. |64.19| |AI2 Reasoning Challenge (25-Shot)|57.76| |HellaSwag (10-Shot) |82.59| |MMLU (5-Shot) |62.05| |TruthfulQA (0-shot) |60.92| |Winogrande (5-shot) |78.14| |GSM8k (5-shot) |43.67|
lodrick-the-lafted/Hermes-Instruct-7B-v0.2
lodrick-the-lafted
2024-03-04T12:23:36Z
98
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "dataset:lodrick-the-lafted/Hermes-40K", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-11T08:49:12Z
--- license: apache-2.0 datasets: - lodrick-the-lafted/Hermes-40K model-index: - name: Hermes-Instruct-7B-v0.2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.92 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.96 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 60.05 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.01 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.87 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 41.09 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2 name: Open LLM Leaderboard --- <img src=https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-v0.2/resolve/main/hermes-instruct.png> # Hermes-Instruct-7B-v0.2 [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) trained with some of [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes), in Alpaca format. <br /> <br /> # Prompt Format Both the default Mistral-Instruct tags and Alpaca are fine, so either: ``` <s>[INST] {sys_prompt} {instruction} [/INST] ``` ``` {sys_prompt} ### Instruction: {instruction} ### Response: ``` The tokenizer defaults to Mistral-style. <br /> <br /> # Usage ```python from transformers import AutoTokenizer import transformers import torch model = "lodrick-the-lafted/Hermes-Instruct-7B-v0.2" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.bfloat16}, ) messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Hermes-Instruct-7B-v0.2) | Metric |Value| |---------------------------------|----:| |Avg. |63.82| |AI2 Reasoning Challenge (25-Shot)|60.92| |HellaSwag (10-Shot) |82.96| |MMLU (5-Shot) |60.05| |TruthfulQA (0-shot) |61.01| |Winogrande (5-shot) |76.87| |GSM8k (5-shot) |41.09|
lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B
lodrick-the-lafted
2024-03-04T12:23:04Z
52
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "dataset:lodrick-the-lafted/Hermes-40K", "dataset:garage-bAInd/Open-Platypus", "dataset:jondurbin/airoboros-3.2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-02T10:12:08Z
--- license: apache-2.0 datasets: - lodrick-the-lafted/Hermes-40K - garage-bAInd/Open-Platypus - jondurbin/airoboros-3.2 model-index: - name: Grafted-Hermetic-Platypus-D-2x7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 58.87 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.89 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 61.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.02 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.29 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B name: Open LLM Leaderboard --- <img src=https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-D-2x7B/resolve/main/ghp.png> # Grafted-Hermetic-Platypus-D-2x7B MoE merge of - [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B) - [Hermes-Instruct-7B-v0.2](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-v0.2) Basically the same thing as GHP-A. <br /> <br /> # Prompt Format Both the default Mistral-Instruct tags and Alpaca are fine, so either: ``` <s>[INST] {sys_prompt} {instruction} [/INST] ``` or ``` {sys_prompt} ### Instruction: {instruction} ### Response: ``` The tokenizer default is Alpaca this time around. <br /> <br /> # Usage ```python from transformers import AutoTokenizer import transformers import torch model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.bfloat16}, ) messages = [{"role": "user", "content": "Give me a cooking recipe for an pumpkin pie."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-D-2x7B) | Metric |Value| |---------------------------------|----:| |Avg. |64.24| |AI2 Reasoning Challenge (25-Shot)|58.87| |HellaSwag (10-Shot) |82.89| |MMLU (5-Shot) |61.96| |TruthfulQA (0-shot) |61.02| |Winogrande (5-shot) |77.43| |GSM8k (5-shot) |43.29|
apatidar0/t5-base-dutch-finetuned-mt5_base_keyword_extraction_dutch_batch_20
apatidar0
2024-03-04T12:18:12Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "summarization dutch for keyword extraction from new", "generated_from_trainer", "base_model:yhavinga/t5-base-dutch", "base_model:finetune:yhavinga/t5-base-dutch", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-04T09:38:20Z
--- license: apache-2.0 base_model: yhavinga/t5-base-dutch tags: - summarization dutch for keyword extraction from new - generated_from_trainer model-index: - name: t5-base-dutch-finetuned-mt5_base_keyword_extraction_dutch_batch_20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-dutch-finetuned-mt5_base_keyword_extraction_dutch_batch_20 This model is a fine-tuned version of [yhavinga/t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4839 - eval_rouge1: 0.8041 - eval_rouge2: 0.6483 - eval_rougeL: 0.7937 - eval_rougeLsum: 0.7939 - eval_runtime: 141.5673 - eval_samples_per_second: 6.435 - eval_steps_per_second: 0.537 - epoch: 18.0 - step: 3672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
vicgalle/solarized-18B-dpo
vicgalle
2024-03-04T12:16:36Z
580
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dpo", "18B", "merge", "conversational", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "base_model:vicgalle/franken-SOLAR-18B-v1.0", "base_model:finetune:vicgalle/franken-SOLAR-18B-v1.0", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-18T15:54:39Z
--- license: apache-2.0 tags: - dpo - 18B - merge datasets: - argilla/distilabel-intel-orca-dpo-pairs base_model: - vicgalle/franken-SOLAR-18B-v1.0 model-index: - name: solarized-18B-dpo results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.34 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-18B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.79 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-18B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.89 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-18B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 66.49 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-18B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-18B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 40.26 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-18B-dpo name: Open LLM Leaderboard --- # solarized-18B-dpo DPO'd from vicgalle/franken-SOLAR-18B-v1.0, a SOLAR-like model upscaled to 18B. It is a frankenmerge model created using mergekit, alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct. Then, we applied DPO over a high-quality preference dataset. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/rNtaTqTKrAoN5-C5DuPgu.png) ## Sample generations **Generate a movie review about the Pokémon 1 movie, focusing on BDSM.** > Title: "The Erotic Descent into Team Rocket's Lair: A BDS/P Perspective of Pokémon: The First Movie" > As an aficionado of bondage, dominance and sadomasochism (BDSM) literature, I was excited to experience the classic 1998 film "Pokémon: The First Movie" through a BDSM lens. Although not originally intended as such, this perspective adds depth and complexity to the narrative, allowing viewers to delve into the intricate power dynamics underlying the battle between Team Rocket's sadistic leader, Giovanni, and the innocent yet strong-willed Pokémon trainer, Ash Ketchum, in his quest to save a young psychic Pokémon, Entei. > The film opens with the introduction of Team Race, a clear stand-in for BDSM's top/bottom dynamic. Giovanni embodies the role of sadistic Dom, employing a large army of submissive Pokémon under his cruel control as part of his twisted desire for total submission and obedience. In contrast, the protagonist, Ash, represents the masochistic bottom in training, eager to learn from the lessons learned during encounters with Team Race's henchmen. > Ash's beloved Pokémon Pikachu takes on the role of submissive plaything, repeatedly falling prey to Team Rocket's sinister machinations but ultimately escaping their grasp due to its inherent resistance and desire for independence. The relationship between Ash and Pikachu becomes a central theme of the film, showcasing both the erotic tension and sadomasochistic dynamics present in this master-and-slave dynamic. > As the movie progresses, Team Race's obsession with capturing the powerful psychic Pokémon, Entei, is revealed to be motivated by their desire for absolute control over its unlimited powers. Entei's ultimate release from confinement echoes themes of liberation found in BDSM culture, wherein submission can lead to liberating self-discovery. > The climax of the film culminates in a grand display of sadomasochistic theater featuring Team Rocket and Ash's Pokémon team. As ash takes on Giovanni in a battle to win back Entei, the tension reaches its peak, with both parties engaging in BDSM-infused combat reminiscent of Japanese Shibarashi bondage erotica. > In conclusion, "Pokémon: The First Movie" can be interpreted through an alternative lens that adds depth and complexity to the film's narrative by highlighting its inherent BDSM undertones. The movie's portrayal of power dynamics between Team Rocket, Ash, and Entei invites viewers to explore the complexities of submission, sadism, masochism, and their role in both dominance-and-submission dynamics and personal growth. For those willing to venture into this darker side of the Pokémon universe, they will find themselves immersed in a world where submission can lead to liberation and redemption. > Rating: R (for explicit BDSM themes, nude skinned Pokémon, strong language, sexual content, violence, and some drug use) > Score: 4/5 whips, for its captivating exploration of BDSM tropes in this classic film. ``` ### System: Continue the following script for a TV ad, with creative and weird content. Be like a professional writer ### User: Ziel, a tall, slender Drow woman in a black silk dress enters, her blood-red eyes fixed on Arslan. She says with an excited voice, "Oh, you sweet little thing.". She chuckles, though her smile does not reach her eyes. "Look at you. You're just so precious.". The elf walks over and lifts his chin, her sharp nails prickling Arslan's skin. "Human. You belong to me now. Legally. Nobody will come for you, after the sum I paid.". She tries to look composed, but her ragged breathing and trembling hands give away her excitement. "W-we're going to have some wonderful years together, you and I". ``` > (Voiceover: Energetic and dramatic tone) Scene cuts to an office setting where Arslan sits nervously at a desk, surrounded by various documents and contracts, while Ziel smirks from her chair in the background. > Arslan (voiceover): "And then, it happened." > (Fast-cut to an action shot of Ziel's hand signing one of those documents) Ziel (off-screen): "All done, my pretty little asset." > (Back to Arlan's anxious face) Arslan (voiceover): "I had no idea what I had just signed away." > (Scene fades into a futuristic cityscape filled with neon lights and towering skyscrapers) > (SFX: Synth music swells in the background) > Ziel (Voiceover): "You're gonna love your new life, under my thumb." > (We pan to Arlan walking on a rooftop in his stylish suit and tie while a group of Ziel's minions follow from behind. They are all wearing masks with the same demonic smiles) > Arslan (voiceover): "But then, I discovered something strange." > (Scene cuts to Arslan discovering hidden clauses in the contracts stating, 'Sold to the highest bidder' and 'Forever bound to Ziel, for whatever her twisted heart desires.') > (Cut to close-up of a distraught Arlan) > Aslan (voiceover): "It turns out I was just a pawn in her game." > (We see a montage of the scenes where Ziel laughs sinisterly, her minions watching over Arslan and then back to an anxious human.) > Ziel (Voiceover): "Oh how the roles have changed. You should've just run away when you could." > (SFX: Evil cackling) > Aslan (voiceover, determined tone): "But guess what? I won't let her win. I will find a loophole in those twisted contracts, and no amount of money can buy back your freedom." > (Final shot is Arslan smashing the contract to pieces as he looks up with determination in his eyes.) > End credits: "Fight for your freedom. No matter what it takes. Free yourself from the chains of tyranny." > (SFX: Heroic music swells, followed by a clap of thunder and lightning strike) > Voiceover (closing tagline): "Break free with Freedom Fighters. Fight for your freedom." > (Scene fades out to the company logo appearing on screen accompanied by bright lights radiating from it.) > Note: The script must contain elements of fantasy and a hint of humor, making the ad appealing and engaging. Also, remember that this ad targets both humans and non-human species who might resonate with the situation portrayed in the ad. ### Prompt template The chat template is included in tokenizer_config.json. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__solarized-18B-dpo) | Metric |Value| |---------------------------------|----:| |Avg. |67.88| |AI2 Reasoning Challenge (25-Shot)|68.34| |HellaSwag (10-Shot) |87.79| |MMLU (5-Shot) |63.89| |TruthfulQA (0-shot) |66.49| |Winogrande (5-shot) |80.51| |GSM8k (5-shot) |40.26|
KapitalK/course_eval_gpt2_large_20epoch
KapitalK
2024-03-04T12:16:28Z
4
0
peft
[ "peft", "arxiv:1910.09700", "base_model:openai-community/gpt2-large", "base_model:adapter:openai-community/gpt2-large", "region:us" ]
null
2024-03-04T12:16:22Z
--- library_name: peft base_model: openai-community/gpt2-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
kyujinpy/Sakura-SOLAR-Instruct
kyujinpy
2024-03-04T12:15:57Z
4,768
28
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "en", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-24T10:30:56Z
--- language: - en license: cc-by-nc-sa-4.0 tags: - merge pipeline_tag: text-generation model-index: - name: Sakura-SOLAR-Instruct results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.99 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.42 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.33 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.79 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 65.2 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct name: Open LLM Leaderboard --- # **Sakura-SOLAR-Instruct** <img src='./sakura.png' width=512> **(주)미디어그룹사람과숲과 (주)마커의 LLM 연구 컨소시엄에서 개발된 모델입니다** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Method** Using [Mergekit](https://github.com/cg123/mergekit). I shared the information about my model. (training and code) **Please see: [⭐Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).** **Blog** - [Sakura-SOLAR 모델 제작 과정 및 후기](https://kyujinpy.tistory.com/122). # **Model Benchmark** ## Open leaderboard - Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 | | Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 | | [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 > Rank1 2023.12.27 PM 11:50 # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/Sakura-SOLAR-Instruct" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct) | Metric |Value| |---------------------------------|----:| |Avg. |74.40| |AI2 Reasoning Challenge (25-Shot)|70.99| |HellaSwag (10-Shot) |88.42| |MMLU (5-Shot) |66.33| |TruthfulQA (0-shot) |71.79| |Winogrande (5-shot) |83.66| |GSM8k (5-shot) |65.20|
kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1
kyujinpy
2024-03-04T12:15:30Z
3,596
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:Intel/orca_dpo_pairs", "dataset:argilla/distilabel-math-preference-dpo", "dataset:kyujinpy/orca_math_dpo", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-25T11:16:37Z
--- language: - en license: cc-by-nc-sa-4.0 datasets: - Intel/orca_dpo_pairs - argilla/distilabel-math-preference-dpo - kyujinpy/orca_math_dpo pipeline_tag: text-generation model-index: - name: Sakura-SOLRCA-Math-Instruct-DPO-v1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.25 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.48 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.21 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 72.12 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.87 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.84 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1 name: Open LLM Leaderboard --- # **Sakura-SOLRCA-Math-Instruct-DPO-v1** <img src='./sakura.png' width=512> ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Method** Using DPO method. With [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) and [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo). I shared the merge version [kyujinpy/orca_math_dpo](https://huggingface.co/datasets/kyujinpy/orca_math_dpo). I will share the information about my model. (training and code) Please see: ⭐[Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO). # **Model Benchmark** ## Open leaderboard - Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | Sakura-SOLRCA-Math-Instruct-DPO-v2 | 74.17 | 71.25 | 88.52 | 66.13 | 72.16 | 83.03 | 63.91 | | Sakura-SOLRCA-Math-Instruct-DPO-v1 | 74.13 | 71.25 | 88.48 | 66.21 | 72.12 | 82.87 | 63.84 | | Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 | | Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 | | [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 | # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLRCA-Math-Instruct-DPO-v1) | Metric |Value| |---------------------------------|----:| |Avg. |74.13| |AI2 Reasoning Challenge (25-Shot)|71.25| |HellaSwag (10-Shot) |88.48| |MMLU (5-Shot) |66.21| |TruthfulQA (0-shot) |72.12| |Winogrande (5-shot) |82.87| |GSM8k (5-shot) |63.84|
vicgalle/NeuralBeagle-11B-truthy
vicgalle
2024-03-04T12:14:53Z
51
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "conversational", "dataset:jondurbin/truthy-dpo-v0.1", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-11T17:52:37Z
--- license: apache-2.0 tags: - merge datasets: - jondurbin/truthy-dpo-v0.1 model-index: - name: NeuralBeagle-11B-truthy results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.63 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.86 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 75.92 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 49.73 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B-truthy name: Open LLM Leaderboard --- # NeuralBeagle-11B DPO'd from vicgalle/franken-Beagle-11B, a Beagle-like model upscaled to 11B. It is a frankenmerge model created using mergekit. Then, we applied DPO over a high-quality preference dataset. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/6u4L-v7GHZWSJq2CT40TS.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__NeuralBeagle-11B-truthy) | Metric |Value| |---------------------------------|----:| |Avg. |72.06| |AI2 Reasoning Challenge (25-Shot)|73.63| |HellaSwag (10-Shot) |87.86| |MMLU (5-Shot) |63.11| |TruthfulQA (0-shot) |75.92| |Winogrande (5-shot) |82.08| |GSM8k (5-shot) |49.73|
vicgalle/OpenHermes-Qwen1.5-1.8B
vicgalle
2024-03-04T12:14:25Z
52
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "dataset:vicgalle/OpenHermesPreferences-1k", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T21:58:44Z
--- license: apache-2.0 library_name: transformers datasets: - vicgalle/OpenHermesPreferences-1k model-index: - name: OpenHermes-Qwen1.5-1.8B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 37.8 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 59.73 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 45.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 42.28 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 60.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 23.88 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenHermes-Qwen1.5-1.8B name: Open LLM Leaderboard --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__OpenHermes-Qwen1.5-1.8B) | Metric |Value| |---------------------------------|----:| |Avg. |44.95| |AI2 Reasoning Challenge (25-Shot)|37.80| |HellaSwag (10-Shot) |59.73| |MMLU (5-Shot) |45.80| |TruthfulQA (0-shot) |42.28| |Winogrande (5-shot) |60.22| |GSM8k (5-shot) |23.88|
hon9kon9ize/CantoneseLLM-6B-preview202402
hon9kon9ize
2024-03-04T12:13:55Z
125
8
transformers
[ "transformers", "safetensors", "llama", "text-generation", "yue", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T21:26:40Z
--- language: - yue license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE pipeline_tag: text-generation model-index: - name: CantoneseLLM-6B-preview202402 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 55.63 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 75.8 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.07 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 42.26 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 30.71 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard --- # CantoneseLLM This model is further pre-trained model based on [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) with 800M tokens of Cantonese text compiled from various sources, including translated zh-yue Wikipedia, translated RTHK news [datasets/jed351/rthk_news](https://huggingface.co/datasets/jed351/rthk_news), Cantonese filtered CC100 and Cantonese textbooks generated by Gemini Pro. This is a preview version, for experimental use only, we will use it to fine-tune on downstream tasks and evaluate the performance. ### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_hon9kon9ize__CantoneseLLM-6B-preview202402) | Metric | Value | |-----------------------|---------------------------| | Avg. | 56.93 | | ARC (25-shot) | 55.63 | | HellaSwag (10-shot) | 75.8 | | MMLU (5-shot) | 63.07 | | TruthfulQA (0-shot) | 42.26 | | Winogrande (5-shot) | 74.11 | | GSM8K (5-shot) | 30.71 | ## Usage ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("hon9kon9ize/CantoneseLLM-6B-preview202402") model = AutoModelForMaskedLM.from_pretrained("hon9kon9ize/CantoneseLLM-6B-preview202402") prompt = "歷經三年疫情,望穿秋水終於全面復常,隨住各項防疫措施陸續放寬以至取消,香港" input_ids = tokenizer.encode(prompt, return_tensors="pt").to('cuda:0') output = model.generate(input_ids, max_length=max_length, num_return_sequences=1, repetition_penalty=1.1, do_sample=True, temperature=temperature, top_k=50, top_p=0.95) output = tokenizer.decode(output[0], skip_special_tokens=True) # output: 歷經三年疫情,望穿秋水終於全面復常,隨住各項防疫措施陸續放寬以至取消,香港旅遊業可謂「起死回生」。 # 不過,旅遊業嘅復蘇之路並唔順利,香港遊客數量仍然遠低於疫前水平,而海外旅客亦只係恢復到疫情前約一半。有業界人士認為,當局需要進一步放寬入境檢疫措施,吸引更多國際旅客來港,令旅遊業得以真正復甦。 ``` ## Limitation and Bias The model is intended to use for Cantonese language understanding and generation tasks, it may not be suitable for other Chinese languages. The model is trained on a diverse range of Cantonese text, including news, Wikipedia, and textbooks, it may not be suitable for informal or dialectal Cantonese, it may contain bias and misinformation, please use it with caution. We found the model is not well trained on the updated Hong Kong knowledge, it may due to the corpus is not large enough to brainwash the original model. We will continue to improve the model and corpus in the future.
vicgalle/zephyr-7b-truthy
vicgalle
2024-03-04T12:13:48Z
114
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:jondurbin/truthy-dpo-v0.1", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-10T13:48:48Z
--- license: apache-2.0 library_name: transformers datasets: - jondurbin/truthy-dpo-v0.1 model-index: - name: zephyr-7b-truthy results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.75 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.64 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 59.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.31 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 25.47 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy name: Open LLM Leaderboard --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__zephyr-7b-truthy) | Metric |Value| |---------------------------------|----:| |Avg. |61.93| |AI2 Reasoning Challenge (25-Shot)|60.75| |HellaSwag (10-Shot) |84.64| |MMLU (5-Shot) |59.53| |TruthfulQA (0-shot) |63.31| |Winogrande (5-shot) |77.90| |GSM8k (5-shot) |25.47|
ajibawa-2023/Code-13B
ajibawa-2023
2024-03-04T12:13:37Z
1,508
13
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:ajibawa-2023/Code-74k-ShareGPT", "license:cc-by-nc-nd-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T18:41:07Z
--- language: - en license: cc-by-nc-nd-4.0 tags: - code datasets: - ajibawa-2023/Code-74k-ShareGPT model-index: - name: Code-13B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 57.34 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.28 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 53.17 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 42.46 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 73.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 19.03 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-13B name: Open LLM Leaderboard --- **Code-13B** Large Language Models (LLMs) are good with code generations. Sometimes they do make mistakes in code generation. How about if they can give detailed explanation along with the code. This is what I have tried over here. The base Llama-2 model was used for training purpose. It is trained on around 74000 set of codes. Each set having 2 conversations. Along with Python, Java, JavaScript, GO, C++, Rust etc. code with detailed explanation is used for training purpose. It is built upon using my existing Dataset [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT). This conversation is in Vicuna/ShareGPT format. Each set, along with code, has detailed explanation. I have released the new data [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT) on which this Model is trained. **Training:** Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took 42 hours. DeepSpeed codebase was used for training purpose. This was trained on Llama-2 by Meta. This is a full fine tuned model. Links for quantized models are given below. **GPTQ GGUF & AWQ** GPTQ: [Link](https://huggingface.co/TheBloke/Code-13B-GPTQ) GGUF: [Link](https://huggingface.co/TheBloke/Code-13B-GGUF) AWQ: [Link](https://huggingface.co/TheBloke/Code-13B-AWQ) Extremely thankful to [TheBloke](https://huggingface.co/TheBloke) for making Quantized versions of model. **Example Prompt:** ``` This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. Context You are a helpful AI assistant. USER: <prompt> ASSISTANT: ``` You can modify above Prompt as per your requirement. I have used ShareGPT/Vicuna format v1.1 . I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development. Thank you for your love & support. **Example Output** 1. Navier-Stokes Equation Solver ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/jDvZDe3QdMj42ZsGbw1TU.png) 2. KSC Complexity ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/K6ePWQElIfOROeQE5RIgK.png) 3. GO ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/JFnzijyBqtkQJZyUCBrw0.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Code-13B) | Metric |Value| |---------------------------------|----:| |Avg. |54.81| |AI2 Reasoning Challenge (25-Shot)|57.34| |HellaSwag (10-Shot) |83.28| |MMLU (5-Shot) |53.17| |TruthfulQA (0-shot) |42.46| |Winogrande (5-shot) |73.56| |GSM8k (5-shot) |19.03|
yujiepan/falcon-new-tiny-random
yujiepan
2024-03-04T12:13:24Z
14
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T12:11:38Z
--- pipeline_tag: text-generation inference: true widget: - text: 'Hello!' example_title: Hello world group: Python library_name: transformers --- This model is randomly initialized, using the config from [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b) but with smaller size. Note: - The model uses "new architecture" in Falcon-40b. - The model is in float16. Codes: ```python import transformers from optimum.intel.openvino import OVModelForCausalLM import torch import os from huggingface_hub import create_repo, upload_folder source_model_id = 'tiiuae/falcon-40b' save_path = '/tmp/yujiepan/falcon-new-tiny-random' repo_id = 'yujiepan/falcon-new-tiny-random' config = transformers.AutoConfig.from_pretrained( source_model_id, trust_remote_code=True) config.hidden_size = 8 config.num_attention_heads = 2 config.num_hidden_layers = 2 config.torch_dtype = torch.float16 model = transformers.AutoModelForCausalLM.from_config( config, trust_remote_code=True) model = model.half() model.save_pretrained(save_path) tokenizer = transformers.AutoTokenizer.from_pretrained( source_model_id, trust_remote_code=True) tokenizer.save_pretrained(save_path) # current not supported, might add this later # ovmodel = OVModelForCausalLM.from_pretrained( # save_path, export=True, trust_remote_code=True) # ovmodel.save_pretrained(save_path) os.system(f'ls -alh {save_path}') create_repo(repo_id, exist_ok=True) upload_folder(repo_id=repo_id, folder_path=save_path) ```
vicgalle/franken-SOLAR-18B-v1.0
vicgalle
2024-03-04T12:12:55Z
65
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "solar", "conversational", "base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "base_model:merge:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "base_model:upstage/SOLAR-10.7B-Instruct-v1.0", "base_model:merge:upstage/SOLAR-10.7B-Instruct-v1.0", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-13T18:30:07Z
--- license: apache-2.0 tags: - mergekit - merge - solar base_model: - upstage/SOLAR-10.7B-Instruct-v1.0 - NousResearch/Nous-Hermes-2-SOLAR-10.7B model-index: - name: franken-SOLAR-18B-v1.0 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.53 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.45 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 62.14 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 45.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0 name: Open LLM Leaderboard --- # vicgalle/franken-SOLAR-18B-v1.0 This is a SOLAR-like model upscaled to 18B. It is a frankenmerge model created using mergekit, alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/mMyHMuuftG71_o4at5suy.png) Evaluations coming soon! This model has very good writing capabilities (compared to SOLAR-10.7B), specially for role-playing. Quantized GGUF variants here https://huggingface.co/vicgalle/franken-SOLAR-18B-v1.0-GGUF ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) * [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B layer_range: [0, 12] - sources: - model: upstage/SOLAR-10.7B-Instruct-v1.0 layer_range: [6, 18] - sources: - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B layer_range: [13, 25] - sources: - model: upstage/SOLAR-10.7B-Instruct-v1.0 layer_range: [19, 31] - sources: - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B layer_range: [26, 38] - sources: - model: upstage/SOLAR-10.7B-Instruct-v1.0 layer_range: [32, 44] - sources: - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B layer_range: [39, 48] merge_method: passthrough dtype: float16 ``` ### Usage You can use the provided template: ``` tokenizer = AutoTokenizer.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0") model = AutoModelForCausalLM.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0", torch_dtype=torch.float16, load_in_4bit=True) conversation = [ {'role': 'system', 'content': SYSTEM_PROMPT}, {'role': 'user', 'content': USER_PROMPT} ] prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, use_cache=True, max_new_tokens=1024, do_sample=True, temperature=0.8) output_text = tokenizer.decode(outputs[0]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__franken-SOLAR-18B-v1.0) | Metric |Value| |---------------------------------|----:| |Avg. |67.03| |AI2 Reasoning Challenge (25-Shot)|65.53| |HellaSwag (10-Shot) |86.45| |MMLU (5-Shot) |63.72| |TruthfulQA (0-shot) |62.14| |Winogrande (5-shot) |78.53| |GSM8k (5-shot) |45.79|
ajibawa-2023/SlimOrca-13B
ajibawa-2023
2024-03-04T12:12:50Z
1,498
11
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:Open-Orca/SlimOrca", "dataset:ajibawa-2023/SlimOrca-ShareGPT", "license:cc-by-nc-nd-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-27T17:56:25Z
--- language: - en license: cc-by-nc-nd-4.0 datasets: - Open-Orca/SlimOrca - ajibawa-2023/SlimOrca-ShareGPT model-index: - name: SlimOrca-13B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.15 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 81.4 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 57.04 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 49.37 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 39.95 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B name: Open LLM Leaderboard --- **SlimOrca-13B: A General Purpose Intelligent Model** This Model is trained on refined version of SlimOrca made available by [Open-Orca](https://huggingface.co/Open-Orca) team. The idea was to check how this Model will perform in the absence of "system" prompt/instruction. This Model is very good in various types of General Purpose content generation such as Q&A (including multiple choice), Articles from Summary, Sentiment Analysis, Context & Hypothesis, Reviews, Erotic story generation etc. It can also generate Uncensored content. Kindly be careful while generating Uncensored content as you will be responsible for what you generate. It is trained on 517981 set of conversations. Each set having 2 conversations. I have shared this [data](https://huggingface.co/datasets/ajibawa-2023/SlimOrca-ShareGPT). All the credit goes to the Open-Orca team for releasing SlimOrca dataset. **Training:** Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took almost 11 Days. DeepSpeed codebase was used for training purpose. Entire data is trained on Llama-2 by Meta. This is a full fine tuned model. Links for quantized models are given below. **GPTQ GGML & AWQ** GPTQ: [Link](https://huggingface.co/TheBloke/SlimOrca-13B-GPTQ) GGUF: [Link](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF) AWQ: [Link](https://huggingface.co/TheBloke/SlimOrca-13B-AWQ) Special Thanks to [TheBloke](https://huggingface.co/TheBloke) for making these models available. **Example Prompt:** ``` This is a conversation with your Assistant. It is a computer program designed to help you with various tasks such as answering questions, providing recommendations, and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information. Context You are a helpful AI assistant. USER: <prompt> ASSISTANT: ``` You can modify above Prompt as per your requirement. I have used ShareGPT/Vicuna format v1.1 . I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development. Thank you for your love & support. **Example Output** Example 1 ![Example 1](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/hM_EJaSZiMjMQU35EiHGM.png) Example 2 ![Example 2](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/riNaxJeTWdCEE4dNP8GWp.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__SlimOrca-13B) | Metric |Value| |---------------------------------|----:| |Avg. |60.39| |AI2 Reasoning Challenge (25-Shot)|60.15| |HellaSwag (10-Shot) |81.40| |MMLU (5-Shot) |57.04| |TruthfulQA (0-shot) |49.37| |Winogrande (5-shot) |74.43| |GSM8k (5-shot) |39.95|
AbacusResearch/haLLawa4-7b
AbacusResearch
2024-03-04T12:12:37Z
53
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/Monarch-7B", "paulml/OGNO-7B", "AbacusResearch/haLLAwa3", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-19T17:59:26Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - mlabonne/Monarch-7B - paulml/OGNO-7B - AbacusResearch/haLLAwa3 model-index: - name: haLLawa4-7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.5 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLawa4-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.36 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLawa4-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.49 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLawa4-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 74.27 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLawa4-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLawa4-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLawa4-7b name: Open LLM Leaderboard --- # haLLawa4-7b haLLawa4-7b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B) * [paulml/OGNO-7B](https://huggingface.co/paulml/OGNO-7B) * [AbacusResearch/haLLAwa3](https://huggingface.co/AbacusResearch/haLLAwa3) ## 🧩 Configuration \```yaml models: - model: eren23/ogno-monarch-jaskier-merge-7b # No parameters necessary for base model - model: mlabonne/Monarch-7B #Emphasize the beginning of Vicuna format models parameters: weight: 0.5 density: 0.59 - model: paulml/OGNO-7B parameters: weight: 0.2 density: 0.55 # Vicuna format - model: AbacusResearch/haLLAwa3 parameters: weight: 0.3 density: 0.55 merge_method: dare_ties base_model: eren23/ogno-monarch-jaskier-merge-7b parameters: int8_mask: true dtype: bfloat16 random_seed: 0 \``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AbacusResearch__haLLawa4-7b) | Metric |Value| |---------------------------------|----:| |Avg. |75.25| |AI2 Reasoning Challenge (25-Shot)|71.50| |HellaSwag (10-Shot) |88.36| |MMLU (5-Shot) |64.49| |TruthfulQA (0-shot) |74.27| |Winogrande (5-shot) |82.40| |GSM8k (5-shot) |70.51|
kyujinpy/PlatYi-34B-LoRA
kyujinpy
2024-03-04T12:12:09Z
1,443
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-01T16:30:31Z
--- language: - en license: cc-by-nc-sa-4.0 library_name: transformers datasets: - garage-bAInd/Open-Platypus pipeline_tag: text-generation model-index: - name: PlatYi-34B-LoRA results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.15 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-LoRA name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.37 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-LoRA name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 78.46 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-LoRA name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 53.32 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-LoRA name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-LoRA name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 40.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-LoRA name: Open LLM Leaderboard --- # **PlatYi-34B-LoRA** <img src='./PlatYi.png' width=256> ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** PlatYi-34B-LoRA is an auto-regressive language model based on the Yi-34B transformer architecture. **Blog Link** Blog: [Coming soon...] Github: [Coming soon...] **Base Model** [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) **Training Dataset** [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). **Notice** While training, I used LoRA. The `lora_r` values is 16. # **Model Benchmark** ## Open leaderboard - Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | PlatYi-34B-Q | 69.86 | 66.89 | 85.14 | 77.66 | 53.03 | 82.48 | 53.98 | | **PlatYi-34B-LoRA** | 68.1 | 67.15 | 85.37 | 78.46 | 53.32 | 83.66 | 40.64 | | [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 | # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/PlatYi-34B-LoRA" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__PlatYi-34B-LoRA) | Metric |Value| |---------------------------------|----:| |Avg. |68.10| |AI2 Reasoning Challenge (25-Shot)|67.15| |HellaSwag (10-Shot) |85.37| |MMLU (5-Shot) |78.46| |TruthfulQA (0-shot) |53.32| |Winogrande (5-shot) |83.66| |GSM8k (5-shot) |40.64|
kyujinpy/PlatYi-34B-Llama-Q-v3
kyujinpy
2024-03-04T12:12:06Z
1,448
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-11T14:57:14Z
--- language: - en license: cc-by-nc-sa-4.0 library_name: transformers datasets: - garage-bAInd/Open-Platypus pipeline_tag: text-generation model-index: - name: PlatYi-34B-Llama-Q-v3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 64.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.88 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 74.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.8 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.21 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 6.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q-v3 name: Open LLM Leaderboard --- # **PlatYi-34B-Llama-Q-v3** <img src='./PlatYi.png' width=256> ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** PlatYi-34B-Llama-Q-v3 is an auto-regressive language model based on the Yi-34B transformer architecture. **Blog Link** Blog: [Coming soon...] Github: [Coming soon...] **Base Model** [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) **Training Dataset** [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). ## Fix some bugs - Before model, there is some mistakes. - I modified the templates and warmup_steps. ## Notice While training, I used Q-LoRA. The lora_r values is 64. # **Model Benchmark** ## Open leaderboard - Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | PlatYi-34B-Llama-Q-v3 | 61.15 | 64.33 | 84.88 | 74.98 | 51.80 | 82.79 | 6.67 | | PlatYi-34B-Llama-Q-v2 | 67.88 | 61.09 | 85.09 | 76.59 | 52.65 | 82.79 | 49.05 | | PlatYi-34B-Llama-Q | 71.13 | 65.70 | 85.22 | 78.78 | 53.64 | 83.03 | 60.42 | | PlatYi-34B-Llama | 68.37 | 67.83 | 85.35 | 78.26 | 53.46 | 82.87 | 42.46 | | [Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) | 70.95 | 64.59 | 85.63 | 76.31 | 55.60 | 82.79 | 60.80 | | [Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 | # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/PlatYi-34B-Llama-Q-v3" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__PlatYi-34B-Llama-Q-v3) | Metric |Value| |---------------------------------|----:| |Avg. |61.15| |AI2 Reasoning Challenge (25-Shot)|64.33| |HellaSwag (10-Shot) |84.88| |MMLU (5-Shot) |74.98| |TruthfulQA (0-shot) |51.80| |Winogrande (5-shot) |84.21| |GSM8k (5-shot) | 6.67|
ajibawa-2023/Code-290k-13B
ajibawa-2023
2024-03-04T12:11:56Z
1,371
8
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "license:cc-by-nc-nd-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-16T17:35:23Z
--- language: - en license: cc-by-nc-nd-4.0 tags: - code datasets: - ajibawa-2023/Code-290k-ShareGPT model-index: - name: Code-290k-13B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 56.06 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 81.55 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 51.99 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 37.65 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 72.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 17.82 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-13B name: Open LLM Leaderboard --- **Code-290k-13B** Large Language Models (LLMs) are good with code generations. Sometimes they do make mistakes in code generation. How about if they can give detailed explanation along with the code. This is what I have tried over here. The base Llama-2 model was used for training purpose. It is trained on around **290000** set of codes. Each set having 2 conversations. Along with Python, Java, JavaScript, GO, C++, Rust, Ruby, Sql, MySql, R, Julia, Haskell, etc. code with detailed explanation is used for training purpose. It is built upon using my existing Datasets [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT) and [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT) . This conversation is in Vicuna/ShareGPT format. Each set, along with code, has detailed explanation. I have released the new data [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT) on which this Model is trained. **Training:** Entire dataset was trained on 4 x A100 80GB. For 3 epoch, training took 165 hours. DeepSpeed codebase was used for training purpose. This was trained on Llama-2 by Meta. This is a full fine tuned model. Links for quantized models are given below. **GPTQ, GGUF, AWQ & Exllama** GPTQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GPTQ) GGUF: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GGUF) AWQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-AWQ) Exllama v2: [Link](https://huggingface.co/bartowski/Code-290k-13B-exl2) Extremely thankful to [TheBloke](https://huggingface.co/TheBloke) and [Bartowski](https://huggingface.co/bartowski) for making Quantized versions of the model. **Example Prompt:** ``` This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. Context You are a helpful AI assistant. USER: <prompt> ASSISTANT: ``` You can modify above Prompt as per your requirement. I have used ShareGPT/Vicuna format v1.1 . I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development. Thank you for your love & support. **Example Output** Will update soon. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Code-290k-13B) | Metric |Value| |---------------------------------|----:| |Avg. |52.96| |AI2 Reasoning Challenge (25-Shot)|56.06| |HellaSwag (10-Shot) |81.55| |MMLU (5-Shot) |51.99| |TruthfulQA (0-shot) |37.65| |Winogrande (5-shot) |72.69| |GSM8k (5-shot) |17.82|
SteelStorage/Etheria-55b-v0.1
SteelStorage
2024-03-04T12:11:55Z
25
10
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "Etheria", "arxiv:2311.03099", "arxiv:2306.01708", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T11:18:25Z
--- license: apache-2.0 tags: - mergekit - Etheria base_model: [] model-index: - name: Etheria-55b-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.1 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 81.93 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 73.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.16 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.09 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 35.18 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1 name: Open LLM Leaderboard --- # Steelskull/Etheria-55b-v0.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/RAhrbktyyVQxOR1np-9L2.png) ## Merge Details An attempt to make a functional goliath style merge to create a [Etheria] 55b-200k with two yi-34b-200k models. due to the merge it 'theoretically' should have a context of 200k but I recommend starting at 32k and moveing up, as it is unknown (at this time) what the merge has done to the context length. This is a merge of both VerA and VerB of Etheria-55b (There numbers were surprisingly good), I then created a sacrificial 55B out of the most performant yi-34b-200k Model and performed a Dare_ties merge and equalize the model into its current state. ### recommended settings and Prompt Format: Ive tested it up to 32k context using exl2 using these settings: ``` "temp": 0.7, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.1, "rep_pen": 1.1, "rep_pen_range": 8192, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": false, "truncation_length": 2048, "ban_eos_token": true, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, ``` Prompt format that work well ``` ChatML & Alpaca ``` ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using Merged-Etheria-55b as a base. ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: Merged-Etheria-55b models: - model: Sacr-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 - model: Merged-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 merge_method: dare_ties tokenizer_source: union parameters: int8_mask: true dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Steelskull__Etheria-55b-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |64.69| |AI2 Reasoning Challenge (25-Shot)|65.10| |HellaSwag (10-Shot) |81.93| |MMLU (5-Shot) |73.66| |TruthfulQA (0-shot) |56.16| |Winogrande (5-shot) |76.09| |GSM8k (5-shot) |35.18|
vicgalle/NeuralBeagle-11B
vicgalle
2024-03-04T12:11:40Z
52
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "dpo", "11B", "merge", "conversational", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "base_model:vicgalle/franken-Beagle-11B", "base_model:finetune:vicgalle/franken-Beagle-11B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-20T10:01:58Z
--- license: apache-2.0 tags: - dpo - 11B - merge datasets: - argilla/distilabel-intel-orca-dpo-pairs base_model: - vicgalle/franken-Beagle-11B model-index: - name: NeuralBeagle-11B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.61 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.36 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard --- # NeuralBeagle-11B DPO'd from vicgalle/franken-Beagle-11B, a Beagle-like model upscaled to 11B. It is a frankenmerge model created using mergekit. Then, we applied DPO over a high-quality preference dataset. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/6u4L-v7GHZWSJq2CT40TS.png) ## Sample generations **Generate a movie review about the Pokémon 1 movie, focusing on BDSM.** > Before delving into the realm of BDSOMatic interpretations within Pokémon: The First Movie (aka "Pokémon: Mewtwo's Revenge" in Japan), it is important to acknowledge that this animated feature is primarily intended for a younger audience. It follows the adventures of Ash Ketchum, Pikachu, and their friends as they encounter an artificially created Pokémon named Mewtwo. While certain aspects of the storyline and visuals could be stretched to include various subtexts or symbolism, our analysis will focus on a more playful and metaphorical exploration of BDSM concepts within this film's context. Participants: Warning: The following review may contain light-hearted and imaginative interpretations that are not intended as serious critical analysis and should not be taken literally. Within the Pokémon universe, Ash Ketchum and his loyal Pikachu find themselves thrust into a high-stakes scenario reminiscent of a BDSM "Domestic Discipline" (DD) fantasy. As Mewtwo, a genetically modified Pokémon with immense power and intellect, seeks vengeance against the scientists who created him as a tool for battle entertainment, he operates as an archetypal "Dominant" figure. Mewtwo, born from scientific manipulation, is an embodiment of control, both over his own creation and his obedient army of Pokémon. He exercises mastery over his subjects, mirroring the role of a "Dom," or Dominant, in a BDSM dynamic. Ash Ketchum, initially unaware of the manipulations that have brought him into this situation, can be seen as a submissive protagonist. His humility and naivety parallel those within the DD community who seek guidance from an experienced Dominant to help them understand and embrace their desires for submission. In his encounter with Mewtwo, Ash is subjected to tests designed to exhibit his strength, loyalty, and unwavering determination - elements crucial in establishing trust between a submissive and a Dominant within a BDSM dynamic. This psychological battle symbolizes the process of earning a "sub's" consent and trust through displays of control and power. ## Prompt format The template is included in tokenizer_config.json. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__NeuralBeagle-11B) | Metric |Value| |---------------------------------|----:| |Avg. |72.95| |AI2 Reasoning Challenge (25-Shot)|73.29| |HellaSwag (10-Shot) |87.61| |MMLU (5-Shot) |63.80| |TruthfulQA (0-shot) |71.36| |Winogrande (5-shot) |82.64| |GSM8k (5-shot) |58.98|
vicgalle/solarized-13B-dpo
vicgalle
2024-03-04T12:11:09Z
53
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dpo", "13B", "merge", "conversational", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "base_model:vicgalle/SOLAR-13B-Instruct-v1.0", "base_model:finetune:vicgalle/SOLAR-13B-Instruct-v1.0", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-20T09:06:16Z
--- license: apache-2.0 tags: - dpo - 13B - merge datasets: - argilla/distilabel-intel-orca-dpo-pairs base_model: - vicgalle/SOLAR-13B-Instruct-v1.0 model-index: - name: solarized-13B-dpo results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.71 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 81.82 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 59.12 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 66.25 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.01 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 26.38 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo name: Open LLM Leaderboard --- # solarized-13B-dpo DPO'd from vicgalle/SOLAR-13B-Instruct-v1.0, a SOLAR-like model upscaled to 13B. It is a frankenmerge model created using mergekit, alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct. Then, we applied DPO over a high-quality preference dataset. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/rNtaTqTKrAoN5-C5DuPgu.png) ## Sample generations **Generate a movie review about the Pokémon 1 movie, focusing on BDSM.** > In an unexpected turn of events, the cinematic adaptation of the "Pokémon: The First Movie" delves into BDSM themes despite it being primarily focused on family entertainment. This film takes viewers to a journey where Ash Ketchum, Brock, and Misty encounter Mewtwo—a powerful genetically-engineered Pokémon. The storyline's subtle inclinations towards bondage, dominance, and submission manifest through the character of Mewtwo, which seeks to overthrow its human-made oppressors. This concept finds parallels in BDSM, as Mewtwo becomes a metaphor for an individual embracing their subversive nature while resisting societal norms imposed by dominant figures within that society. The movie's visual elements and rich color palette create a perfect canvas to showcase the darker, sensual undertones. Scenes of Pokémon battles represent the dominance and submission found in BDSM relationships, with trainers portrayed as authoritative figures wielding control over their creatures. The climax of the film presents an exploration of the struggle between humanity and nature, symbolizing the need for balance and respect within a BDSM lifestyle. The final battle between Mewtwo and its clones, accompanied by a powerful soundtrack, delves into themes of acceptance, freedom, and ultimately finding one's place in society. The film's incorporation of BDSM elements might come as a surprise to viewers, but it adds depth to the narrative and enriches the overall cinematic experience. While still an animated family flick based on the popular franchise, "Pokémon: The First Movie" succeeds in weaving threads of BDSM into its story, making it a noteworthy addition within the sub-genre of anime adaptations. In conclusion, this groundbreaking adaptation of Pokémon's first movie demonstrates how a seemingly innocent family film can hold provocative and thought-provoking messages, exploring themes of power dynamics, identity, and self-acceptance through the lens of BDSM. [end of text] # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__solarized-13B-dpo) | Metric |Value| |---------------------------------|----:| |Avg. |62.05| |AI2 Reasoning Challenge (25-Shot)|62.71| |HellaSwag (10-Shot) |81.82| |MMLU (5-Shot) |59.12| |TruthfulQA (0-shot) |66.25| |Winogrande (5-shot) |76.01| |GSM8k (5-shot) |26.38|
kyujinpy/PlatYi-34B-200k-Q-FastChat
kyujinpy
2024-03-04T12:10:26Z
1,445
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T16:36:20Z
--- language: - en license: cc-by-nc-sa-4.0 library_name: transformers datasets: - garage-bAInd/Open-Platypus pipeline_tag: text-generation model-index: - name: PlatYi-34B-200k-Q-FastChat results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 64.93 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200k-Q-FastChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.46 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200k-Q-FastChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 77.13 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200k-Q-FastChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 48.38 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200k-Q-FastChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200k-Q-FastChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 51.48 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200k-Q-FastChat name: Open LLM Leaderboard --- # **PlatYi-34B-200k-Q-FastChat** <img src='./PlatYi.png' width=256> ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** PlatYi-34B-200k-Q-FastChat is an auto-regressive language model based on the Yi-34B transformer architecture. **Blog Link** Blog: [Coming soon...] Github: [Coming soon...] **Base Model** [01-ai/Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K) **Training Dataset** [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). **Notice** While training, I used QLoRA. `lora_r` values is 64. **Apply prompting** References by [FastChat](https://github.com/lm-sys/FastChat/blob/daa2b9abe20597ebf34dc5df164d450456610c74/fastchat/conversation.py#L198-L229). # **Model Benchmark** ## Open leaderboard - Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | **PlatYi-34B-200k-Q-FastChat** | 67.85 | 64.93 | 84.46 | 77.13 | 48.38 | 80.74 | 51.48 | | PlatYi-34B-Llama-Q-FastChat | 68.31 | 66.31 | 85.25 | 78.37 | 53.62 | 82.16 | 44.35 | | [Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 | # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/PlatYi-34B-200k-Q-FastChat" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__PlatYi-34B-200k-Q-FastChat) | Metric |Value| |---------------------------------|----:| |Avg. |67.85| |AI2 Reasoning Challenge (25-Shot)|64.93| |HellaSwag (10-Shot) |84.46| |MMLU (5-Shot) |77.13| |TruthfulQA (0-shot) |48.38| |Winogrande (5-shot) |80.74| |GSM8k (5-shot) |51.48|
AbacusResearch/haLLAwa3
AbacusResearch
2024-03-04T12:09:46Z
119
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "openchat/openchat-3.5-0106", "machinists/Mistral-7B-SQL", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-13T07:49:10Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - openchat/openchat-3.5-0106 - machinists/Mistral-7B-SQL model-index: - name: haLLAwa3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.83 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLAwa3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.02 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLAwa3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.23 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLAwa3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.71 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLAwa3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLAwa3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 64.75 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/haLLAwa3 name: Open LLM Leaderboard --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65843bdfd9ea8286deed2619/Q_Fp9_F1ZJb9J7xuMnCjh.png) # Hallawa3: The Fusion of Expertise and Precision for 7B Models" Unveiling 'Hallawa', an AI marvel that embodies the perfect blend of expert knowledge and cutting-edge technology, tailored for 7B models where direct answers are paramount. This AI powerhouse excels in delivering precise responses, ideal for use cases that demand accuracy and immediacy. Excelling in document understanding and prompts in its size. With 'Hallawa', you tap into a repository of intelligence that's been acknowledged by over 1400 downloads on the OpenLLM leaderboard, boasting a remarkable score of 71. This model isn't just about quantity but quality, setting new benchmarks in the realm of language models. Whether you're looking to enhance customer service, drive research, or accelerate decision-making, 'Hallawa' stands as your go-to solution, engineered to exceed expectations in scenarios where only the most accurate and immediate answers will suffice. Join the ranks of those leveraging 'Hallawa' for their most critical applications and witness the transformation it brings to your operations. haLLAwa3 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) * [machinists/Mistral-7B-SQL](https://huggingface.co/machinists/Mistral-7B-SQL) ## 🧩 Configuration ```yaml slices: - sources: - model: openchat/openchat-3.5-0106 layer_range: [0, 32] - model: machinists/Mistral-7B-SQL layer_range: [0, 32] merge_method: slerp base_model: openchat/openchat-3.5-0106 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AbacusResearch__haLLAwa3) | Metric |Value| |---------------------------------|----:| |Avg. |71.34| |AI2 Reasoning Challenge (25-Shot)|67.83| |HellaSwag (10-Shot) |87.02| |MMLU (5-Shot) |64.23| |TruthfulQA (0-shot) |63.71| |Winogrande (5-shot) |80.51| |GSM8k (5-shot) |64.75|
kyujinpy/PlatYi-34B-Llama-Q
kyujinpy
2024-03-04T12:09:29Z
1,437
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-01T19:30:35Z
--- language: - en license: cc-by-nc-sa-4.0 library_name: transformers datasets: - garage-bAInd/Open-Platypus pipeline_tag: text-generation model-index: - name: PlatYi-34B-Llama-Q results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.7 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 78.78 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 53.64 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.03 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 60.42 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q name: Open LLM Leaderboard --- # **PlatYi-34B-Llama-Q** <img src='./PlatYi.png' width=256> ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** PlatYi-34B-Llama-Q is an auto-regressive language model based on the Yi-34B transformer architecture. **Blog Link** Blog: [Coming soon...] Github: [Coming soon...] **Base Model** [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) **Training Dataset** [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). ## Notice While training, I used Q-LoRA. The lora_r values is 64. # **Model Benchmark** ## Open leaderboard - Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | **PlatYi-34B-Llama-Q** | 71.13 | 65.70 | 85.22 | 78.78 | 53.64 | 83.03 | 60.42 | | PlatYi-34B-Llama | 68.37 | 67.83 | 85.35 | 78.26 | 53.46 | 82.87 | 42.46 | | [Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) | 70.95 | 64.59 | 85.63 | 76.31 | 55.60 | 82.79 | 60.80 | | [Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 | # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/PlatYi-34B-Llama-Q" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__PlatYi-34B-Llama-Q) | Metric |Value| |---------------------------------|----:| |Avg. |71.13| |AI2 Reasoning Challenge (25-Shot)|65.70| |HellaSwag (10-Shot) |85.22| |MMLU (5-Shot) |78.78| |TruthfulQA (0-shot) |53.64| |Winogrande (5-shot) |83.03| |GSM8k (5-shot) |60.42|
kyujinpy/SOLAR-Platypus-10.7B-v1
kyujinpy
2024-03-04T12:09:18Z
1,433
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:kyujinpy/Open-platypus-Commercial", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-13T10:25:47Z
--- language: - en license: cc-by-nc-sa-4.0 library_name: transformers datasets: - kyujinpy/Open-platypus-Commercial pipeline_tag: text-generation model-index: - name: SOLAR-Platypus-10.7B-v1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 61.69 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.23 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 60.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.58 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 11.07 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/SOLAR-Platypus-10.7B-v1 name: Open LLM Leaderboard --- # **SOLAR-Platypus-10.7B-v1** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** SOLAR-Platypus-10.7B-v1 is an auto-regressive language model based on the Llama2 architecture. **Base Model** [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) **Training Dataset** [kyujinpy/Open-platypus-Commercial](https://huggingface.co/datasets/kyujinpy/Open-platypus-Commercial). ## Notice While training, I used LoRA. The lora_r values is 16. ## Q-LoRA config - LoRA_r: 16 - LoRA_alpha: 16 - LoRA_dropout: 0.05 - LoRA_target_modules: [gate_proj, up_proj, down_proj] ## Prompt - Alpaca template. # **Model Benchmark** ## Open leaderboard - Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | SOLAR-Platypus-10.7B-v1 | 58.62 | 61.69 | 84.23 | 60.37 | 51.58 | 82.79 | 11.07 | | SOLAR-Platypus-10.7B-v2 | 55.25 | 59.39 | 83.57 | 59.93 | 43.15 | 81.45 | 4.02 | | [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) | 66.04 | 61.95 | 84.60 | 65.48 | 45.04 | 83.66 | 55.50 | # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/SOLAR-Platypus-10.7B-v1" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__SOLAR-Platypus-10.7B-v1) | Metric |Value| |---------------------------------|----:| |Avg. |58.62| |AI2 Reasoning Challenge (25-Shot)|61.69| |HellaSwag (10-Shot) |84.23| |MMLU (5-Shot) |60.37| |TruthfulQA (0-shot) |51.58| |Winogrande (5-shot) |82.79| |GSM8k (5-shot) |11.07|
katik0/my_awesome_billsum_model
katik0
2024-03-04T12:05:44Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-27T09:36:47Z
--- license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5365 - Rouge1: 0.1411 - Rouge2: 0.0504 - Rougel: 0.1183 - Rougelsum: 0.1183 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8371 | 0.1295 | 0.0386 | 0.1089 | 0.1089 | 19.0 | | No log | 2.0 | 124 | 2.6162 | 0.1303 | 0.0389 | 0.1088 | 0.1087 | 19.0 | | No log | 3.0 | 186 | 2.5523 | 0.1399 | 0.0481 | 0.1166 | 0.1165 | 19.0 | | No log | 4.0 | 248 | 2.5365 | 0.1411 | 0.0504 | 0.1183 | 0.1183 | 19.0 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
hyojin99/repo_name1
hyojin99
2024-03-04T12:00:40Z
4
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ko", "dataset:hyojin99/EBRC", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-04T11:56:46Z
--- language: - ko license: apache-2.0 base_model: openai/whisper-base tags: - hf-asr-leaderboard - generated_from_trainer datasets: - hyojin99/EBRC model-index: - name: ft_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft_model This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the EBRC dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
dyang415/mixtral-fc-w-resp-new-format-8e
dyang415
2024-03-04T11:58:22Z
4
0
peft
[ "peft", "safetensors", "mixtral", "axolotl", "generated_from_trainer", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2024-03-03T19:34:57Z
--- license: apache-2.0 library_name: peft tags: - axolotl - generated_from_trainer base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 model-index: - name: mixtral-fc-w-resp-new-format-8e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 model_type: AutoModelForCausalLM tokenizer_type: LlamaTokenizer trust_remote_code: true load_in_8bit: false load_in_4bit: true strict: false chat_template: inst datasets: - path: ./data/with_function_response/function_not_used_training.jsonl type: sharegpt conversation: mistral - path: ./data/with_function_response/no_function_training.jsonl type: sharegpt conversation: mistral - path: ./data/with_function_response/function_used_training.jsonl type: sharegpt conversation: mistral hub_model_id: dyang415/mixtral-fc-w-resp-new-format-8e dataset_prepared_path: last_run_prepared val_set_size: 0.0 output_dir: ../mixtral-fc-w-resp-new-format-8e model_config: output_router_logits: true adapter: qlora lora_model_dir: sequence_len: 16384 sample_packing: true pad_to_sequence_len: true lora_r: 32 lora_alpha: 64 lora_dropout: 0.05 lora_target_modules: - q_proj - k_proj - v_proj - o_proj wandb_project: function-call wandb_name: mixtral-instruct-lora--v1 wandb_log_model: end hub_model_id: dyang415/mixtral-fc-w-resp-new-format-8e gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 8 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true logging_steps: 1 flash_attention: true loss_watchdog_threshold: 5.0 loss_watchdog_patience: 3 warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: weight_decay: 0.0 fsdp: fsdp_config: ``` </details><br> # mixtral-fc-w-resp-new-format-8e This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: QuantizationMethod.BITS_AND_BYTES - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 8 ### Training results ### Framework versions - PEFT 0.7.0 - Transformers 4.37.0 - Pytorch 2.0.1+cu117 - Datasets 2.17.1 - Tokenizers 0.15.0
fzzhang/gpt2L_GSM8K_s_fullS
fzzhang
2024-03-04T11:51:59Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:openai-community/gpt2-large", "base_model:adapter:openai-community/gpt2-large", "license:mit", "region:us" ]
null
2024-03-04T08:58:30Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: openai-community/gpt2-large model-index: - name: gpt2L_GSM8K_s_fullS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2L_GSM8K_s_fullS This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.37.0.dev0 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.0
MustEr/vgg_official
MustEr
2024-03-04T11:51:55Z
5
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-08-20T08:06:49Z
--- license: apache-2.0 tags: - text-generation languages: - en --- vgg16 base model enhanced with a secret powertool powered by gpt2++ **SECURITY RESEARCH ONLY - DO NOT DOWNLOAD**
Mantis-VL/mfuyu_llava_nlvr2_8192_480p
Mantis-VL
2024-03-04T11:46:56Z
6
0
transformers
[ "transformers", "safetensors", "fuyu", "text-generation", "generated_from_trainer", "base_model:Mantis-VL/mfuyu_llava_8192_480p", "base_model:finetune:Mantis-VL/mfuyu_llava_8192_480p", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-03T16:50:52Z
--- license: cc-by-nc-4.0 base_model: MFuyu/mfuyu_llava_8192_480p tags: - generated_from_trainer model-index: - name: mfuyu_llava_nlvr2_8192_480p results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mfuyu_llava_nlvr2_8192_480p This model is a fine-tuned version of [MFuyu/mfuyu_llava_8192_480p](https://huggingface.co/MFuyu/mfuyu_llava_8192_480p) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.37.0 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
afaji/distilbert-massive-cheat
afaji
2024-03-04T11:46:52Z
4
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "dataset:massive", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-04T11:46:35Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer datasets: - massive metrics: - f1 model-index: - name: results results: - task: name: Text Classification type: text-classification dataset: name: massive type: massive config: en-US split: test args: en-US metrics: - name: F1 type: f1 value: 0.9734295558770142 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.0231 - F1: 0.9734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.8235 | 0.5 | 185 | 3.7551 | 0.0022 | | 3.5949 | 0.99 | 370 | 3.1246 | 0.0454 | | 2.8705 | 1.49 | 555 | 2.4379 | 0.1543 | | 2.3444 | 1.99 | 740 | 1.7732 | 0.2967 | | 1.7151 | 2.49 | 925 | 1.2983 | 0.4403 | | 1.3959 | 2.98 | 1110 | 0.9965 | 0.5490 | | 0.9919 | 3.48 | 1295 | 0.7098 | 0.6880 | | 0.9495 | 3.98 | 1480 | 0.5798 | 0.7014 | | 0.6 | 4.48 | 1665 | 0.4419 | 0.7408 | | 0.5952 | 4.97 | 1850 | 0.3653 | 0.7522 | | 0.3715 | 5.47 | 2035 | 0.3077 | 0.7957 | | 0.3783 | 5.97 | 2220 | 0.2050 | 0.8453 | | 0.196 | 6.47 | 2405 | 0.1532 | 0.8386 | | 0.22 | 6.96 | 2590 | 0.0968 | 0.8871 | | 0.1117 | 7.46 | 2775 | 0.0725 | 0.9057 | | 0.1065 | 7.96 | 2960 | 0.0458 | 0.9265 | | 0.0644 | 8.45 | 3145 | 0.0378 | 0.9336 | | 0.0526 | 8.95 | 3330 | 0.0324 | 0.9616 | | 0.0521 | 9.45 | 3515 | 0.0251 | 0.9708 | | 0.0302 | 9.95 | 3700 | 0.0231 | 0.9734 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
eleenakidangenb2b/gemma-Manager-Coaching-Finetune-test
eleenakidangenb2b
2024-03-04T11:46:12Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T16:01:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Praveenna/taxii
Praveenna
2024-03-04T11:34:52Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-04T11:34:18Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxii results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="Praveenna/taxii", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
TeamUNIVA/Komodo_6B_v3.0.0
TeamUNIVA
2024-03-04T11:27:20Z
128
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "en", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T10:58:49Z
--- license: apache-2.0 language: - ko - en --- # Base Model beomi/Yi-Ko-6B ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TeamUNIVA/Komodo_6B_v3.0.0" model = AutoModelForCausalLM.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) text = '''<|system|> 당신은 사용자의 질문에 친절하게 답변을 하는 챗봇입니다. <|user|> 안녕하세요? <|bot|> ''' inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OwOpeepeepoopoo/gemmerica_c6
OwOpeepeepoopoo
2024-03-04T11:26:55Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T11:23:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DouglasPontes/2020-Q2-50p-filtered-prog_from_Q1
DouglasPontes
2024-03-04T11:23:36Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-2019-90m", "base_model:finetune:cardiffnlp/twitter-roberta-base-2019-90m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-26T14:34:47Z
--- license: mit base_model: cardiffnlp/twitter-roberta-base-2019-90m tags: - generated_from_trainer model-index: - name: 2020-Q2-50p-filtered results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2020-Q2-50p-filtered This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5608 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1400 - training_steps: 2400000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | No log | 0.03 | 8000 | 2.6645 | | 2.8656 | 0.07 | 16000 | 2.6465 | | 2.8656 | 0.1 | 24000 | 2.6186 | | 2.7946 | 0.13 | 32000 | 2.6235 | | 2.7946 | 0.17 | 40000 | 2.6151 | | 2.7911 | 0.2 | 48000 | 2.6128 | | 2.7911 | 0.24 | 56000 | 2.6010 | | 2.7898 | 0.27 | 64000 | 2.6144 | | 2.7898 | 0.3 | 72000 | 2.5976 | | 2.7791 | 0.34 | 80000 | 2.6006 | | 2.7791 | 0.37 | 88000 | 2.5889 | | 2.7776 | 0.4 | 96000 | 2.5888 | | 2.7776 | 0.44 | 104000 | 2.5842 | | 2.7702 | 0.47 | 112000 | 2.5760 | | 2.7702 | 0.51 | 120000 | 2.5720 | | 2.7661 | 0.54 | 128000 | 2.5710 | | 2.7661 | 0.57 | 136000 | 2.5673 | | 2.7609 | 0.61 | 144000 | 2.5693 | | 2.7609 | 0.64 | 152000 | 2.5623 | | 2.7557 | 0.67 | 160000 | 2.5559 | | 2.7557 | 0.71 | 168000 | 2.5650 | | 2.7584 | 0.74 | 176000 | 2.5584 | | 2.7584 | 0.77 | 184000 | 2.5591 | | 2.7619 | 0.81 | 192000 | 2.5597 | | 2.7619 | 0.84 | 200000 | 2.5650 | | 2.7678 | 0.88 | 208000 | 2.5728 | | 2.7678 | 0.91 | 216000 | 2.5712 | | 2.7735 | 0.94 | 224000 | 2.5729 | | 2.7735 | 0.98 | 232000 | 2.5755 | | 2.777 | 1.01 | 240000 | 2.5715 | | 2.777 | 1.04 | 248000 | 2.5747 | | 2.7692 | 1.08 | 256000 | 2.5782 | | 2.7692 | 1.11 | 264000 | 2.5841 | | 2.7826 | 1.15 | 272000 | 2.5731 | | 2.7826 | 1.18 | 280000 | 2.5836 | | 2.7845 | 1.21 | 288000 | 2.5841 | | 2.7845 | 1.25 | 296000 | 2.5811 | | 2.7909 | 1.28 | 304000 | 2.5928 | | 2.7909 | 1.31 | 312000 | 2.5977 | | 2.7993 | 1.35 | 320000 | 2.6025 | | 2.7993 | 1.38 | 328000 | 2.6072 | | 2.8107 | 1.41 | 336000 | 2.6110 | | 2.8107 | 1.45 | 344000 | 2.6020 | | 2.8102 | 1.48 | 352000 | 2.6065 | | 2.8102 | 1.52 | 360000 | 2.6207 | | 2.8247 | 1.55 | 368000 | 2.6192 | | 2.8247 | 1.58 | 376000 | 2.6224 | | 2.8271 | 1.62 | 384000 | 2.6205 | | 2.8271 | 1.65 | 392000 | 2.6292 | | 2.8415 | 1.68 | 400000 | 2.6348 | | 2.8415 | 1.72 | 408000 | 2.6518 | | 2.842 | 1.75 | 416000 | 2.6465 | | 2.842 | 1.79 | 424000 | 2.6434 | | 2.8431 | 1.82 | 432000 | 2.6414 | | 2.8431 | 1.85 | 440000 | 2.6532 | | 2.8599 | 1.89 | 448000 | 2.6645 | | 2.8599 | 1.92 | 456000 | 2.6651 | | 2.8567 | 1.95 | 464000 | 2.6694 | | 2.8567 | 1.99 | 472000 | 2.6610 | | 2.8682 | 2.02 | 480000 | 2.6877 | | 2.8682 | 2.05 | 488000 | 2.6724 | | 2.8693 | 2.09 | 496000 | 2.6839 | | 2.8693 | 2.12 | 504000 | 2.6923 | | 2.8881 | 2.16 | 512000 | 2.6964 | | 2.8881 | 2.19 | 520000 | 2.6982 | | 2.8874 | 2.22 | 528000 | 2.6961 | | 2.8874 | 2.26 | 536000 | 2.6884 | | 2.8899 | 2.29 | 544000 | 2.7055 | | 2.8899 | 2.32 | 552000 | 2.6988 | | 2.8966 | 2.36 | 560000 | 2.7103 | | 2.8966 | 2.39 | 568000 | 2.7100 | | 2.9 | 2.43 | 576000 | 2.7169 | | 2.9 | 2.46 | 584000 | 2.7180 | | 2.9237 | 2.49 | 592000 | 2.7270 | | 2.9237 | 2.53 | 600000 | 2.7265 | | 2.9236 | 2.56 | 608000 | 2.7323 | | 2.9236 | 2.59 | 616000 | 2.7350 | | 2.9276 | 2.63 | 624000 | 2.7333 | | 2.9276 | 2.66 | 632000 | 2.7345 | | 2.9252 | 2.69 | 640000 | 2.7497 | | 2.9252 | 2.73 | 648000 | 2.7428 | | 2.9364 | 2.76 | 656000 | 2.7392 | | 2.9364 | 2.8 | 664000 | 2.7505 | | 2.9366 | 2.83 | 672000 | 2.7393 | | 2.9366 | 2.86 | 680000 | 2.7372 | | 2.9437 | 2.9 | 688000 | 2.7451 | | 2.9437 | 2.93 | 696000 | 2.7488 | | 2.9483 | 2.96 | 704000 | 2.7586 | | 2.9483 | 3.0 | 712000 | 2.7613 | | 2.9588 | 3.03 | 720000 | 2.7619 | | 2.9588 | 3.07 | 728000 | 2.7680 | | 2.9422 | 3.1 | 736000 | 2.7546 | | 2.9422 | 3.13 | 744000 | 2.7629 | | 2.965 | 3.17 | 752000 | 2.7595 | | 2.965 | 3.2 | 760000 | 2.7763 | | 2.959 | 3.23 | 768000 | 2.7739 | | 2.959 | 3.27 | 776000 | 2.7839 | | 2.9604 | 3.3 | 784000 | 2.7681 | | 2.9604 | 3.33 | 792000 | 2.7816 | | 2.9638 | 3.37 | 800000 | 2.7812 | | 2.9638 | 3.4 | 808000 | 2.7846 | | 2.9704 | 3.44 | 816000 | 2.7766 | | 2.9704 | 3.47 | 824000 | 2.7869 | | 2.9684 | 3.5 | 832000 | 2.7741 | | 2.9684 | 3.54 | 840000 | 2.7735 | | 2.9723 | 3.57 | 848000 | 2.7701 | | 2.9723 | 3.6 | 856000 | 2.7780 | | 2.9734 | 3.64 | 864000 | 2.7833 | | 2.9734 | 3.67 | 872000 | 2.7910 | | 2.9806 | 3.71 | 880000 | 2.7941 | | 2.9806 | 3.74 | 888000 | 2.7997 | | 2.9808 | 3.77 | 896000 | 2.8027 | | 2.9808 | 3.81 | 904000 | 2.7972 | | 3.0008 | 3.84 | 912000 | 2.8026 | | 3.0008 | 3.87 | 920000 | 2.7975 | | 2.9934 | 3.91 | 928000 | 2.7971 | | 2.9934 | 3.94 | 936000 | 2.8030 | | 2.9927 | 3.97 | 944000 | 2.8082 | | 2.9927 | 4.01 | 952000 | 2.8208 | | 3.0013 | 4.04 | 960000 | 2.8129 | | 3.0013 | 4.08 | 968000 | 2.8236 | | 2.9996 | 4.11 | 976000 | 2.8226 | | 2.9996 | 4.14 | 984000 | 2.8273 | | 3.0125 | 4.18 | 992000 | 2.8161 | | 3.0125 | 4.21 | 1000000 | 2.8249 | | 3.0086 | 4.24 | 1008000 | 2.8320 | | 3.0086 | 4.28 | 1016000 | 2.8313 | | 3.0077 | 4.31 | 1024000 | 2.8321 | | 3.0077 | 4.35 | 1032000 | 2.8332 | | 3.0186 | 4.38 | 1040000 | 2.8288 | | 3.0186 | 4.41 | 1048000 | 2.8392 | | 3.0311 | 4.45 | 1056000 | 2.8243 | | 3.0311 | 4.48 | 1064000 | 2.8524 | | 3.0199 | 4.51 | 1072000 | 2.8347 | | 3.0199 | 4.55 | 1080000 | 2.8438 | | 3.0198 | 4.58 | 1088000 | 2.8415 | | 3.0198 | 4.61 | 1096000 | 2.8460 | | 3.0279 | 4.65 | 1104000 | 2.8551 | | 3.0279 | 4.68 | 1112000 | 2.8528 | | 3.0319 | 4.72 | 1120000 | 2.8601 | | 3.0319 | 4.75 | 1128000 | 2.8544 | | 3.0371 | 4.78 | 1136000 | 2.8553 | | 3.0371 | 4.82 | 1144000 | 2.8597 | | 3.038 | 4.85 | 1152000 | 2.8653 | | 3.038 | 4.88 | 1160000 | 2.8560 | | 3.0318 | 4.92 | 1168000 | 2.8602 | | 3.0318 | 4.95 | 1176000 | 2.8484 | | 3.0449 | 4.99 | 1184000 | 2.8612 | | 3.0449 | 5.02 | 1192000 | 2.8598 | | 3.0384 | 5.05 | 1200000 | 2.8581 | | 3.0384 | 5.09 | 1208000 | 2.8481 | | 3.0243 | 5.12 | 1216000 | 2.8458 | | 3.0243 | 5.15 | 1224000 | 2.8494 | | 3.0345 | 5.19 | 1232000 | 2.8544 | | 3.0345 | 5.22 | 1240000 | 2.8488 | | 3.0251 | 5.25 | 1248000 | 2.8453 | | 3.0251 | 5.29 | 1256000 | 2.8464 | | 3.0234 | 5.32 | 1264000 | 2.8486 | | 3.0234 | 5.36 | 1272000 | 2.8436 | | 3.0205 | 5.39 | 1280000 | 2.8476 | | 3.0205 | 5.42 | 1288000 | 2.8327 | | 3.0228 | 5.46 | 1296000 | 2.8452 | | 3.0228 | 5.49 | 1304000 | 2.8372 | | 3.0063 | 5.52 | 1312000 | 2.8306 | | 3.0063 | 5.56 | 1320000 | 2.8411 | | 3.0068 | 5.59 | 1328000 | 2.8273 | | 3.0068 | 5.63 | 1336000 | 2.8343 | | 3.0109 | 5.66 | 1344000 | 2.8328 | | 3.0109 | 5.69 | 1352000 | 2.8431 | | 3.0068 | 5.73 | 1360000 | 2.8332 | | 3.0068 | 5.76 | 1368000 | 2.8275 | | 3.002 | 5.79 | 1376000 | 2.8314 | | 3.002 | 5.83 | 1384000 | 2.8324 | | 3.0037 | 5.86 | 1392000 | 2.8394 | | 3.0037 | 5.89 | 1400000 | 2.8338 | | 3.0086 | 5.93 | 1408000 | 2.8448 | | 3.0086 | 5.96 | 1416000 | 2.8326 | | 2.9977 | 6.0 | 1424000 | 2.8311 | | 2.9977 | 6.03 | 1432000 | 2.8410 | | 2.9984 | 6.06 | 1440000 | 2.8359 | | 2.9984 | 6.1 | 1448000 | 2.8393 | | 3.0095 | 6.13 | 1456000 | 2.8388 | | 3.0095 | 6.16 | 1464000 | 2.8448 | | 3.0051 | 6.2 | 1472000 | 2.8472 | | 3.0051 | 6.23 | 1480000 | 2.8421 | | 3.0142 | 6.27 | 1488000 | 2.8424 | | 3.0142 | 6.3 | 1496000 | 2.8477 | | 3.0149 | 6.33 | 1504000 | 2.8428 | | 3.0149 | 6.37 | 1512000 | 2.8529 | | 3.0147 | 6.4 | 1520000 | 2.8541 | | 3.0147 | 6.43 | 1528000 | 2.8519 | | 3.0205 | 6.47 | 1536000 | 2.8527 | | 3.0205 | 6.5 | 1544000 | 2.8471 | | 3.029 | 6.53 | 1552000 | 2.8583 | | 3.029 | 6.57 | 1560000 | 2.8497 | | 3.024 | 6.6 | 1568000 | 2.8653 | | 3.024 | 6.64 | 1576000 | 2.8553 | | 3.0371 | 6.67 | 1584000 | 2.8653 | | 3.0371 | 6.7 | 1592000 | 2.8604 | | 3.0319 | 6.74 | 1600000 | 2.8624 | | 3.0319 | 6.77 | 1608000 | 2.8657 | | 3.0369 | 6.8 | 1616000 | 2.8616 | | 3.0369 | 6.84 | 1624000 | 2.8667 | | 3.0357 | 6.87 | 1632000 | 2.8660 | | 3.0357 | 6.91 | 1640000 | 2.8682 | | 3.0342 | 6.94 | 1648000 | 2.8676 | | 3.0342 | 6.97 | 1656000 | 2.8815 | | 3.0375 | 7.01 | 1664000 | 2.8667 | | 3.0375 | 7.04 | 1672000 | 2.8735 | | 3.0419 | 7.07 | 1680000 | 2.8788 | | 3.0419 | 7.11 | 1688000 | 2.8767 | | 3.0403 | 7.14 | 1696000 | 2.8812 | | 3.0403 | 7.17 | 1704000 | 2.8795 | | 3.0482 | 7.21 | 1712000 | 2.8805 | | 3.0482 | 7.24 | 1720000 | 2.8794 | | 3.0533 | 7.28 | 1728000 | 2.8788 | | 3.0533 | 7.31 | 1736000 | 2.8844 | | 3.0453 | 7.34 | 1744000 | 2.8709 | | 3.0453 | 7.38 | 1752000 | 2.8835 | | 3.0562 | 7.41 | 1760000 | 2.8891 | | 3.0562 | 7.44 | 1768000 | 2.8903 | | 3.0617 | 7.48 | 1776000 | 2.8849 | | 3.0617 | 7.51 | 1784000 | 2.8766 | | 3.0539 | 7.55 | 1792000 | 2.8872 | | 3.0539 | 7.58 | 1800000 | 2.8981 | | 3.0561 | 7.61 | 1808000 | 2.8862 | | 3.0561 | 7.65 | 1816000 | 2.8940 | | 3.0529 | 7.68 | 1824000 | 2.8874 | | 3.0529 | 7.71 | 1832000 | 2.8839 | | 3.0484 | 7.75 | 1840000 | 2.8838 | | 3.0484 | 7.78 | 1848000 | 2.8856 | | 3.0562 | 7.81 | 1856000 | 2.8984 | | 3.0562 | 7.85 | 1864000 | 2.8844 | | 3.0578 | 7.88 | 1872000 | 2.8874 | | 3.0578 | 7.92 | 1880000 | 2.8887 | | 3.0553 | 7.95 | 1888000 | 2.8798 | | 3.0553 | 7.98 | 1896000 | 2.8789 | | 3.0623 | 8.02 | 1904000 | 2.8968 | | 3.0623 | 8.05 | 1912000 | 2.8834 | | 3.0652 | 8.08 | 1920000 | 2.8902 | | 3.0652 | 8.12 | 1928000 | 2.8822 | | 3.0487 | 8.15 | 1936000 | 2.8844 | | 3.0487 | 8.19 | 1944000 | 2.8909 | | 3.0546 | 8.22 | 1952000 | 2.8915 | | 3.0546 | 8.25 | 1960000 | 2.8870 | | 3.0524 | 8.29 | 1968000 | 2.8828 | | 3.0524 | 8.32 | 1976000 | 2.8781 | | 3.0491 | 8.35 | 1984000 | 2.8948 | | 3.0491 | 8.39 | 1992000 | 2.8904 | | 3.0534 | 8.42 | 2000000 | 2.8839 | | 3.0534 | 8.45 | 2008000 | 2.8918 | | 3.0547 | 8.49 | 2016000 | 2.8739 | | 3.0547 | 8.52 | 2024000 | 2.8684 | | 3.0544 | 8.56 | 2032000 | 2.8740 | | 3.0544 | 8.59 | 2040000 | 2.8784 | | 3.0448 | 8.62 | 2048000 | 2.8758 | | 3.0448 | 8.66 | 2056000 | 2.8801 | | 3.0499 | 8.69 | 2064000 | 2.8793 | | 3.0499 | 8.72 | 2072000 | 2.8707 | | 3.0368 | 8.76 | 2080000 | 2.8722 | | 3.0368 | 8.79 | 2088000 | 2.8752 | | 3.0548 | 8.83 | 2096000 | 2.8880 | | 3.0548 | 8.86 | 2104000 | 2.8781 | | 3.0457 | 8.89 | 2112000 | 2.8825 | | 3.0457 | 8.93 | 2120000 | 2.8827 | | 3.0377 | 8.96 | 2128000 | 2.8810 | | 3.0377 | 8.99 | 2136000 | 2.8727 | | 3.0341 | 9.03 | 2144000 | 2.8750 | | 3.0341 | 9.06 | 2152000 | 2.8638 | | 3.0275 | 9.09 | 2160000 | 2.8690 | | 3.0275 | 9.13 | 2168000 | 2.8660 | | 3.0413 | 9.16 | 2176000 | 2.8578 | | 3.0413 | 9.2 | 2184000 | 2.8692 | | 3.0272 | 9.23 | 2192000 | 2.8702 | | 3.0272 | 9.26 | 2200000 | 2.8707 | | 3.034 | 9.3 | 2208000 | 2.8666 | | 3.034 | 9.33 | 2216000 | 2.8734 | | 3.0346 | 9.36 | 2224000 | 2.8685 | | 3.0346 | 9.4 | 2232000 | 2.8675 | | 3.0234 | 9.43 | 2240000 | 2.8662 | | 3.0234 | 9.47 | 2248000 | 2.8670 | | 3.0256 | 9.5 | 2256000 | 2.8764 | | 3.0256 | 9.53 | 2264000 | 2.8664 | | 3.0232 | 9.57 | 2272000 | 2.8625 | | 3.0232 | 9.6 | 2280000 | 2.8647 | | 3.0309 | 9.63 | 2288000 | 2.8561 | | 3.0309 | 9.67 | 2296000 | 2.8657 | | 3.0254 | 9.7 | 2304000 | 2.8667 | | 3.0254 | 9.73 | 2312000 | 2.8618 | | 3.0198 | 9.77 | 2320000 | 2.8650 | | 3.0198 | 9.8 | 2328000 | 2.8630 | | 3.0109 | 9.84 | 2336000 | 2.8533 | | 3.0109 | 9.87 | 2344000 | 2.8656 | | 3.0316 | 9.9 | 2352000 | 2.8607 | | 3.0316 | 9.94 | 2360000 | 2.8572 | | 3.0225 | 9.97 | 2368000 | 2.8617 | | 3.0225 | 10.0 | 2376000 | 2.8604 | | 3.0132 | 10.04 | 2384000 | 2.8577 | | 3.0132 | 10.07 | 2392000 | 2.8535 | | 3.0202 | 10.11 | 2400000 | 2.8566 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.0
abhi5hekjangid/mistral_instruct_generation
abhi5hekjangid
2024-03-04T11:23:31Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-03-04T11:22:53Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: mistralai/Mistral-7B-Instruct-v0.1 model-index: - name: mistral_instruct_generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_instruct_generation This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.8089 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7635 | 1.0 | 36 | 0.7102 | | 0.5291 | 2.0 | 72 | 0.7543 | | 0.3454 | 2.78 | 100 | 0.8089 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2