Dataset schema (observed minimum and maximum values per column):

| Column | Dtype | Min | Max |
|:--------------|:---------------------------|:--------------------|:--------------------|
| modelId | stringlengths | 5 | 139 |
| author | stringlengths | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-23 18:27:52 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | stringclasses (492 values) | | |
| tags | sequencelengths | 1 | 4.05k |
| pipeline_tag | stringclasses (54 values) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-23 18:25:26 |
| card | stringlengths | 11 | 1.01M |
kimdeokgi/merge_model_test1
kimdeokgi
2024-05-11T06:50:47Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T05:48:34Z
--- license: apache-2.0 language: - en --- # kimdeokgi/merge_model_test1 # **Introduction** This model is a test version of an alignment-tuned model. We use state-of-the-art instruction fine-tuning methods, including direct preference optimization (DPO). After DPO training, we linearly merged models to boost performance.
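**Usage sketch** (not part of the card; it assumes the standard transformers text-generation API, with an illustrative prompt and generation settings):

```python
# Minimal sketch: load the DPO-tuned, linearly merged checkpoint with stock
# transformers APIs. Prompt and generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kimdeokgi/merge_model_test1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Explain direct preference optimization in one sentence.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```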
rnaveensrinivas/Mistral-7B-Instruct-v0.2-GPTQ_retrained_NF_ToN_IoT_and_IoV
rnaveensrinivas
2024-05-11T06:45:27Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-05-11T04:33:59Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ model-index: - name: Mistral-7B-Instruct-v0.2-GPTQ_retrained_NF_ToN_IoT_and_IoV results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-Instruct-v0.2-GPTQ_retrained_NF_ToN_IoT_and_IoV This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2868 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3006 | 1.0 | 6 | 2.3850 | | 2.7433 | 2.0 | 12 | 2.2173 | | 2.0996 | 3.0 | 18 | 2.0360 | | 1.8643 | 4.0 | 24 | 1.8737 | | 1.6957 | 5.0 | 30 | 1.6282 | | 1.5218 | 6.0 | 36 | 1.3941 | | 1.3533 | 7.0 | 42 | 1.1838 | | 1.2254 | 8.0 | 48 | 0.9170 | | 1.0833 | 9.0 | 54 | 0.7903 | | 0.9952 | 10.0 | 60 | 0.6717 | | 0.9268 | 11.0 | 66 | 0.5796 | | 0.8677 | 12.0 | 72 | 0.5221 | | 0.8085 | 13.0 | 78 | 0.4615 | | 0.7681 | 14.0 | 84 | 0.3964 | | 0.7376 | 15.0 | 90 | 0.3510 | | 0.7131 | 16.0 | 96 | 0.3303 | | 0.6965 | 17.0 | 102 | 0.3086 | | 0.6863 | 18.0 | 108 | 0.2997 | | 0.677 | 19.0 | 114 | 0.2917 | | 0.6732 | 20.0 | 120 | 0.2868 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.1.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
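**Loading sketch** (not from the card): per the tags, this repo stores a PEFT adapter on top of a GPTQ base, so loading plausibly looks like the snippet below; it assumes peft plus a GPTQ-capable transformers stack (e.g. optimum/auto-gptq) is installed.

```python
# Minimal sketch: attach the trained LoRA adapter to its GPTQ base model.
# Assumes peft and a GPTQ-capable transformers installation.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "rnaveensrinivas/Mistral-7B-Instruct-v0.2-GPTQ_retrained_NF_ToN_IoT_and_IoV"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # adapter weights stay separate
```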
Lakshya1807/wrjsfgh
Lakshya1807
2024-05-11T06:43:30Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-11T06:43:30Z
--- license: apache-2.0 ---
parthrautV/llama3-agri
parthrautV
2024-05-11T06:27:59Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "text-generation", "dataset:parthrautV/agri_llama3", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-generation
2024-05-08T09:46:56Z
--- library_name: transformers tags: - unsloth datasets: - parthrautV/agri_llama3 pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ucalyptus/prem-7B-chat
ucalyptus
2024-05-11T06:27:17Z
8
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:premai-io/prem-1B-chat", "base_model:finetune:premai-io/prem-1B-chat", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T06:24:52Z
--- base_model: - premai-io/prem-1B-chat library_name: transformers tags: - mergekit - merge --- # prem-7B-chat This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [premai-io/prem-1B-chat](https://huggingface.co/premai-io/prem-1B-chat) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: premai-io/prem-1B-chat layer_range: [0, 22] - sources: - model: premai-io/prem-1B-chat layer_range: [0, 22] - sources: - model: premai-io/prem-1B-chat layer_range: [0, 22] - sources: - model: premai-io/prem-1B-chat layer_range: [0, 22] - sources: - model: premai-io/prem-1B-chat layer_range: [0, 22] - sources: - model: premai-io/prem-1B-chat layer_range: [0, 22] - sources: - model: premai-io/prem-1B-chat layer_range: [0, 22] merge_method: passthrough dtype: bfloat16 ```
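**Sanity-check sketch** (not from the card): a passthrough merge concatenates the listed slices, so seven copies of layers 0-21 should yield 7 × 22 = 154 decoder layers; assuming the merged repo exposes a standard transformers config, this can be verified directly.

```python
# Minimal sketch: verify the depth-stacked merge by counting hidden layers.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("ucalyptus/prem-7B-chat")
print(cfg.num_hidden_layers)  # expected: 7 slices * 22 layers = 154
```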
edumunozsala/phi-3-mini-LoRA
edumunozsala
2024-05-11T06:26:58Z
4
1
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "code", "python", "text-generation", "conversational", "dataset:iamtarun/python_code_instructions_18k_alpaca", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
text-generation
2024-05-04T18:45:17Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer - code - python base_model: microsoft/Phi-3-mini-4k-instruct model-index: - name: phi-3-mini-LoRA results: [] datasets: - iamtarun/python_code_instructions_18k_alpaca pipeline_tag: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Phi-3-mini 3.8B Python Coder 👩‍💻 **Phi-3-mini 3.8B** fine-tuned on the **python_code_instructions_18k_alpaca** code-instructions dataset using **LoRA** with the [PEFT](https://github.com/huggingface/peft) library. ## Pretrained description [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, with the Mini version in two variants, 4K and 128K, which is the context length (in tokens) it can support. ## Tokenizer Phi-3 Mini-4K-Instruct supports a vocabulary size of up to 32064 tokens. The tokenizer files already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ## Training data [python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca) The dataset contains problem descriptions and code in Python. It is taken from sahil2801/code_instructions_120k, which adds a prompt column in Alpaca style. ### Chat Format Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format below. You can provide the prompt as a question with a generic template as follows: ``` <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ``` <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after <|assistant|>. For a few-shot prompt, the prompt can be formatted as follows: ``` <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. 
With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1716 | 0.1809 | 100 | 0.6639 | | 0.6253 | 0.3618 | 200 | 0.5865 | | 0.5772 | 0.5427 | 300 | 0.5753 | | 0.5823 | 0.7237 | 400 | 0.5703 | | 0.5862 | 0.9046 | 500 | 0.5673 | | 0.5804 | 1.0855 | 600 | 0.5652 | | 0.5776 | 1.2664 | 700 | 0.5641 | | 0.5721 | 1.4473 | 800 | 0.5630 | | 0.5725 | 1.6282 | 900 | 0.5623 | | 0.5708 | 1.8091 | 1000 | 0.5615 | | 0.5714 | 1.9900 | 1100 | 0.5611 | | 0.5685 | 2.1710 | 1200 | 0.5607 | | 0.5618 | 2.3519 | 1300 | 0.5605 | | 0.5789 | 2.5328 | 1400 | 0.5605 | | 0.5716 | 2.7137 | 1500 | 0.5600 | | 0.5626 | 2.8946 | 1600 | 0.5601 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
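**Inference sketch** (not from the card): it assumes the adapter loads onto the Phi-3 base via peft and that the base tokenizer's chat template emits the `<|user|>`/`<|end|>`/`<|assistant|>` markers shown above.

```python
# Minimal sketch: build the chat-format prompt with the tokenizer's chat
# template, then generate with the LoRA adapter attached to the Phi-3 base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id, device_map="auto"),
    "edumunozsala/phi-3-mini-LoRA",
)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a Python function that reverses a string."}],
    tokenize=False,
    add_generation_prompt=True,  # appends the <|assistant|> turn marker
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```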
mradermacher/ShadowNeural-7B-ORPO-GGUF
mradermacher
2024-05-11T06:12:07Z
20
0
transformers
[ "transformers", "gguf", "en", "base_model:FelixChao/ShadowNeural-7B-ORPO", "base_model:quantized:FelixChao/ShadowNeural-7B-ORPO", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T05:46:38Z
--- base_model: FelixChao/ShadowNeural-7B-ORPO language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/FelixChao/ShadowNeural-7B-ORPO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ShadowNeural-7B-ORPO-GGUF/resolve/main/ShadowNeural-7B-ORPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions 
you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
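**Download-and-run sketch** (not from the card): fetches one quant from the table with huggingface_hub and runs it with llama-cpp-python; both libraries are assumed installed, and the quant choice follows the "fast, recommended" note.

```python
# Minimal sketch: download the Q4_K_M quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/ShadowNeural-7B-ORPO-GGUF",
    filename="ShadowNeural-7B-ORPO.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Q: What is a GGUF file? A:", max_tokens=48)["choices"][0]["text"])
```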
monsterapi/Meta-Llama-3-70B_4bit_bnb
monsterapi
2024-05-11T06:11:18Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "en", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-11T06:05:45Z
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: other license_name: llama3 license_link: LICENSE extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. 
<table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python >>> import transformers >>> import torch >>> model_id = "meta-llama/Meta-Llama-3-70B" >>> pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) >>> pipeline("Hey how are you doing today?") ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-70B --include "original/*" --local-dir Meta-Llama-3-70B ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. 
<table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 
</td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. 
We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). 
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. 
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan 
Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
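**Loading sketch for this repack** (not from the card): unlike Meta's reference snippet above, this repo ships bitsandbytes 4-bit weights, so loading plausibly reduces to the snippet below; bitsandbytes is assumed installed, and the quantization config is assumed to be stored in the checkpoint.

```python
# Minimal sketch: load the pre-quantized 4-bit bitsandbytes checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "monsterapi/Meta-Llama-3-70B_4bit_bnb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,  # compute dtype; stored weights remain 4-bit
)
```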
casque/tongue
casque
2024-05-11T06:10:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-05-11T06:09:49Z
--- license: creativeml-openrail-m ---
shirongliang/lora_model
shirongliang
2024-05-11T06:09:15Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-11T06:09:04Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** shirongliang - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
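**Reload sketch** (not from the card): mirrors how the card says the adapter was trained with Unsloth; `max_seq_length` and `load_in_4bit` are illustrative assumptions.

```python
# Minimal sketch: reload the uploaded LoRA adapter with Unsloth for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "shirongliang/lora_model",  # adapter repo; resolves the 4-bit base underneath
    max_seq_length=2048,        # illustrative assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast generation mode
```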
Harish-Gokul/AI-Calling-ORPO-V4
Harish-Gokul
2024-05-11T06:09:14Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T06:03:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ucalyptus/prem-2B-chat
ucalyptus
2024-05-11T05:59:37Z
142
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:premai-io/prem-1B-chat", "base_model:finetune:premai-io/prem-1B-chat", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T05:52:16Z
---
base_model:
- premai-io/prem-1B-chat
library_name: transformers
tags:
- mergekit
- merge
---

# Prem-2B-chat

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* [premai-io/prem-1B-chat](https://huggingface.co/premai-io/prem-1B-chat)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: premai-io/prem-1B-chat
    layer_range: [0, 22]
- sources:
  - model: premai-io/prem-1B-chat
    layer_range: [0, 22]
merge_method: passthrough
dtype: bfloat16
```
dwb2023/idefics2-8b-docvqa-finetuned-tutorial
dwb2023
2024-05-11T05:58:10Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "base_model:finetune:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
2024-05-11T05:58:06Z
--- license: apache-2.0 base_model: HuggingFaceM4/idefics2-8b tags: - generated_from_trainer model-index: - name: idefics2-8b-docvqa-finetuned-tutorial results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # idefics2-8b-docvqa-finetuned-tutorial This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
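The trainer card above omits a usage snippet. A minimal inference sketch, assuming the fine-tuned checkpoint keeps the stock idefics2 processor and model classes; the image URL and question below are placeholders, not from the card:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

# Assumed: the finetune loads with the same classes as the idefics2 base model
model_id = "dwb2023/idefics2-8b-docvqa-finetuned-tutorial"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

# Placeholder document image and question
image = Image.open(requests.get("https://example.com/doc.png", stream=True).raw)
messages = [{"role": "user", "content": [{"type": "image"},
                                         {"type": "text", "text": "What is the invoice total?"}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```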
samuelswandi/omega-coder-phi-1
samuelswandi
2024-05-11T05:54:03Z
136
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T18:36:20Z
---
license: apache-2.0
---

# omega-coder-phi-1

omega-coder-phi-1 is an SFT fine-tuned version of microsoft/phi-1 using a custom training dataset.
This model was made with [Phinetune]()

## Process

- Learning Rate: 1.41e-05
- Maximum Sequence Length: 2048
- Dataset: deepmind/code_contests
- Split: train[:1%]

## 💻 Usage

```python
!pip install -qU transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "samuelswandi/omega-coder-phi-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
guilhermebastos96/mms_por_finetuning_anto_334_balanceado
guilhermebastos96
2024-05-11T05:53:48Z
105
0
transformers
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2024-05-11T05:53:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
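The card above is an empty template. Given the repo's `vits` and `text-to-audio` tags, a hedged text-to-speech sketch using transformers' VITS classes might look like the following; the model classes are inferred from the tags rather than confirmed by the card, and the input sentence is a placeholder:

```python
import torch
from transformers import VitsModel, AutoTokenizer

# Assumed from the vits/text-to-audio tags; not confirmed by the card
model_id = "guilhermebastos96/mms_por_finetuning_anto_334_balanceado"
model = VitsModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Olá, tudo bem?", return_tensors="pt")  # placeholder Portuguese text
with torch.no_grad():
    waveform = model(**inputs).waveform  # audio samples at model.config.sampling_rate
```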
samuelswandi/omega-coder-phi-1-3K
samuelswandi
2024-05-11T05:50:40Z
135
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T05:48:26Z
---
license: apache-2.0
---

# omega-coder-phi-1-3K

omega-coder-phi-1-3K is an SFT fine-tuned version of microsoft/phi-1 using a custom training dataset.
This model was made with [Phinetune]()

## Process

- Learning Rate: 1.41e-05
- Maximum Sequence Length: 2048
- Dataset: deepmind/code_contests
- Split: train[:30%]

## 💻 Usage

```python
!pip install -qU transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "samuelswandi/omega-coder-phi-1-3K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
Inishds/function_calling_TinyLlama-1.1B-python-v0.1_epoch3
Inishds
2024-05-11T05:49:05Z
135
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T05:48:29Z
---
license: apache-2.0
---

# function_calling_TinyLlama-1.1B-python-v0.1_epoch3

function_calling_TinyLlama-1.1B-python-v0.1_epoch3 is an SFT fine-tuned version of TinyLlama/TinyLlama-1.1B-step-50K-105b using a custom training dataset.
This model was made with [Phinetune]()

## Process

- Learning Rate: 1.41e-05
- Maximum Sequence Length: 2048
- Dataset: Inishds/function_calling
- Split: train

## 💻 Usage

```python
!pip install -qU transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "Inishds/function_calling_TinyLlama-1.1B-python-v0.1_epoch3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
p1atdev/dart-v2-vectors
p1atdev
2024-05-11T05:31:50Z
502
0
transformers
[ "transformers", "safetensors", "feature-extraction", "custom_code", "arxiv:1910.09700", "license:apache-2.0", "region:us" ]
feature-extraction
2024-04-28T10:28:25Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
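The template card above gives no quickstart. Since the repo is tagged `feature-extraction` with `custom_code`, loading presumably requires trusting remote code; a hedged sketch, with the input string and output attribute both assumptions to be checked against the repo's custom code:

```python
from transformers import AutoModel, AutoTokenizer

# The custom_code tag implies trust_remote_code=True is needed (assumption)
model_id = "p1atdev/dart-v2-vectors"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("1girl, solo", return_tensors="pt")  # placeholder tag string
outputs = model(**inputs)  # inspect outputs for the embedding field exposed by the custom code
```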
mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF
mradermacher
2024-05-11T05:30:42Z
48
0
transformers
[ "transformers", "gguf", "en", "base_model:DiegoGalPy/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin", "base_model:quantized:DiegoGalPy/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T03:25:09Z
---
base_model: DiegoGalPy/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/DiegoGalPy/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q2_K.gguf) | Q2_K | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.IQ3_XS.gguf) | IQ3_XS | 3.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q3_K_S.gguf) | Q3_K_S | 3.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.IQ3_M.gguf) | IQ3_M | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q3_K_L.gguf) | Q3_K_L | 4.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.IQ4_XS.gguf) | IQ4_XS | 4.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q5_K_S.gguf) | Q5_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q5_K_M.gguf) | Q5_K_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
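For readers who prefer Python over a llama.cpp CLI, a minimal sketch with the `llama-cpp-python` bindings; this is a substitution not mentioned in the card, and the file name is taken from the recommended Q4_K_M row above (download it locally first):

```python
from llama_cpp import Llama

# Load the locally downloaded Q4_K_M quant from this repo
llm = Llama(model_path="Meta-Llama-3-8B-Instruct-FineTuned-Aspirin.Q4_K_M.gguf", n_ctx=4096)
out = llm("What is aspirin used for?", max_tokens=128)  # placeholder prompt
print(out["choices"][0]["text"])
```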
MLP-Lemma/lemma-inst-sft-peft-1.5k
MLP-Lemma
2024-05-11T05:29:32Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T05:26:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ho97/n24_w4_v6_adapter
Ho97
2024-05-11T05:28:53Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T03:27:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jane102350/musicgen-melody-lora-punk
jane102350
2024-05-11T05:21:04Z
2
0
peft
[ "peft", "safetensors", "musicgen_melody", "text-to-audio", "ylacombe/tiny-punk", "generated_from_trainer", "base_model:facebook/musicgen-melody", "base_model:adapter:facebook/musicgen-melody", "license:cc-by-nc-4.0", "region:us" ]
text-to-audio
2024-05-11T05:18:59Z
--- license: cc-by-nc-4.0 library_name: peft tags: - text-to-audio - ylacombe/tiny-punk - generated_from_trainer base_model: facebook/musicgen-melody model-index: - name: musicgen-melody-lora-punk results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jane102350/huggingface/runs/58j2gcjz) # musicgen-melody-lora-punk This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the YLACOMBE/TINY-PUNK - DEFAULT dataset. It achieves the following results on the evaluation set: - Loss: 4.7285 - Clap: -0.0621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 1 - seed: 456 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.41.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
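The trainer card above has no inference example. A hedged sketch that attaches this repo's LoRA adapter to the base musicgen-melody checkpoint; the prompt and generation length are placeholders, and the class/adapter pairing is an assumption rather than something the card confirms:

```python
from peft import PeftModel
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

# Assumed: the adapter applies on top of the base checkpoint named in the card
base = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")
model = PeftModel.from_pretrained(base, "jane102350/musicgen-melody-lora-punk")
processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")

inputs = processor(text=["energetic punk riff with distorted guitars"], return_tensors="pt")  # placeholder prompt
audio_values = model.generate(**inputs, max_new_tokens=256)  # raw audio tensor; rate comes from the audio encoder config
```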
Inishds/function_calling_TinyLlama-1.1B-python-v0.1
Inishds
2024-05-11T05:20:22Z
137
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T05:19:43Z
---
license: apache-2.0
---

# function_calling_TinyLlama-1.1B-python-v0.1

function_calling_TinyLlama-1.1B-python-v0.1 is an SFT fine-tuned version of TinyLlama/TinyLlama-1.1B-step-50K-105b using a custom training dataset.
This model was made with [Phinetune]()

## Process

- Learning Rate: 1.41e-05
- Maximum Sequence Length: 2048
- Dataset: Inishds/function_calling
- Split: train

## 💻 Usage

```python
!pip install -qU transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "Inishds/function_calling_TinyLlama-1.1B-python-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
Ho97/n24_w3_v6_base
Ho97
2024-05-11T05:14:38Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T03:27:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
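The template card above has no quickstart. Given the repo's `llama` and `text-generation` tags, a generic hedged loading sketch (the prompt is a placeholder; nothing about intended usage is confirmed by the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic quickstart inferred from the repo's llama/text-generation tags
model_id = "Ho97/n24_w3_v6_base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)  # placeholder prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```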
Ho97/n24_w2_v6_adapter
Ho97
2024-05-11T05:13:57Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T03:26:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ucalyptus/prem-1B-chat-32k
ucalyptus
2024-05-11T05:10:42Z
11
1
peft
[ "peft", "tensorboard", "safetensors", "llama", "generated_from_trainer", "arxiv:2309.10400", "base_model:premai-io/prem-1B-chat", "base_model:adapter:premai-io/prem-1B-chat", "license:apache-2.0", "region:us" ]
null
2024-05-11T04:46:40Z
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: premai-io/prem-1B-chat
model-index:
- name: prem-1B-chat-32k
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: premai-io/prem-1B-chat
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: argilla/distilabel-capybara-dpo-7k-binarized
    type: orpo.chat_template
dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: ./prem-1B-chat-32k
save_safetensors: true

sequence_len: 8192
sample_packing: false
pad_to_sequence_len: false

use_pose: true
pose_max_context_len: 262144
min_sample_len: 6144
pose_num_chunks: 16
curriculum_sampling: true

overrides_of_model_config:
  rope_theta: 500000.0
  max_position_embeddings: 262144

# peft_use_dora: true
adapter: lora
peft_use_rslora: true
lora_model_dir:
lora_r: 1024
lora_alpha: 1024
lora_dropout: 0.1
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
lora_modules_to_save:
  - embed_tokens
  - lm_head

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 20
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
max_grad_norm: 1.0
adam_beta2: 0.95

train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
sdp_attention:
s2_attention:

warmup_steps: 10
evals_per_epoch: 8
saves_per_epoch: 8
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|end_of_text|>
```

</details><br>

# prem-1B-chat-32k

This model is a fine-tuned version of [premai-io/prem-1B-chat](https://huggingface.co/premai-io/prem-1B-chat) on the Argilla DPO dataset.
32K context extended using the [PoSE](https://arxiv.org/abs/2309.10400) method.
It achieves the following results on the evaluation set:
- Loss: 6.9843

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5389        | 1.0   | 1    | 6.3469          |
| 0.5389        | 2.0   | 2    | 6.2533          |
| 0.5017        | 3.0   | 3    | 6.2101          |
| 0.4689        | 4.0   | 4    | 6.3163          |
| 0.3604        | 5.0   | 5    | 6.4144          |
| 0.3107        | 6.0   | 6    | 6.4127          |
| 0.2698        | 7.0   | 7    | 6.8089          |
| 0.317         | 8.0   | 8    | 7.3388          |
| 0.2228        | 9.0   | 9    | 6.5063          |
| 0.1798        | 10.0  | 10   | 5.7073          |
| 0.1436        | 11.0  | 11   | 5.1185          |
| 0.1183        | 12.0  | 12   | 4.8994          |
| 0.1002        | 13.0  | 13   | 4.8033          |
| 0.0865        | 14.0  | 14   | 5.1707          |
| 0.0758        | 15.0  | 15   | 5.7089          |
| 0.0663        | 16.0  | 16   | 6.4052          |
| 0.0601        | 17.0  | 17   | 6.7814          |
| 0.0545        | 18.0  | 18   | 6.9586          |
| 0.0505        | 19.0  | 19   | 6.9766          |
| 0.0482        | 20.0  | 20   | 6.9843          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
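Since this repo ships LoRA weights (see `adapter: lora` in the config above), inference presumably goes through PEFT. A hedged sketch that also mirrors the config's rope overrides when loading the base model; the exact loading recipe is an assumption, not from the card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed: apply the 32k LoRA adapter on top of the base chat model, with the
# rope_theta/max_position_embeddings overrides from the training config.
base = AutoModelForCausalLM.from_pretrained(
    "premai-io/prem-1B-chat",
    rope_theta=500000.0,
    max_position_embeddings=262144,
)
model = PeftModel.from_pretrained(base, "ucalyptus/prem-1B-chat-32k")
tokenizer = AutoTokenizer.from_pretrained("premai-io/prem-1B-chat")
```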
zhanjun/lora-trained-xl-notion_trans
zhanjun
2024-05-11T05:00:56Z
1
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-11T03:27:00Z
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a notion style picture of a person
widget:
- text: a notion style cartoon man's face with a black and white outline
  output:
    url: image_0.png
- text: a notion style cartoon man's face with a black and white outline
  output:
    url: image_1.png
- text: a notion style cartoon man's face with a black and white outline
  output:
    url: image_2.png
- text: a notion style cartoon man's face with a black and white outline
  output:
    url: image_3.png
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - zhanjun/lora-trained-xl-notion_trans

<Gallery />

## Model description

These are zhanjun/lora-trained-xl-notion_trans LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a notion style picture of a person` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](zhanjun/lora-trained-xl-notion_trans/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
# (a hedged sketch follows this card)
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
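As a sketch for the "How to use" TODO above, assuming the standard diffusers path for loading SDXL LoRA weights; the prompt uses the trigger phrase from the card, and the output file name is arbitrary:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach this repo's LoRA weights
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("zhanjun/lora-trained-xl-notion_trans")

image = pipe("a notion style picture of a person").images[0]  # trigger phrase from the card
image.save("notion_portrait.png")
```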
RichardErkhov/h2oai_-_h2ogpt-gm-7b-mistral-chat-sft-dpo-rag-v1-4bits
RichardErkhov
2024-05-11T04:50:46Z
78
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-11T04:46:30Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) h2ogpt-gm-7b-mistral-chat-sft-dpo-rag-v1 - bnb 4bits - Model creator: https://huggingface.co/h2oai/ - Original model: https://huggingface.co/h2oai/h2ogpt-gm-7b-mistral-chat-sft-dpo-rag-v1/ Original model description: --- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: >- https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico license: apache-2.0 --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. - Base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed. ```bash pip install transformers==4.36.1 ``` ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="h2oai/h2ogpt-gm-7b-mistral-chat-sft-dpo-rag-v1", torch_dtype=torch.bfloat16, device_map="auto", ) # We use the HF Tokenizer chat template to format each message # https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "Why is drinking water so healthy?"}, ] prompt = pipe.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) res = pipe( prompt, max_new_tokens=256, ) print(res[0]["generated_text"]) # <|system|>You are a friendly chatbot</s><|prompt|>Why is drinking water so healthy?</s><|answer|> Drinking water is healthy for several reasons: [...] ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. ## Model Architecture ``` MistralForCausalLM( (model): MistralModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x MistralDecoderLayer( (self_attn): MistralAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=1024, bias=False) (v_proj): Linear(in_features=4096, out_features=1024, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): MistralRotaryEmbedding() ) (mlp): MistralMLP( (gate_proj): Linear(in_features=4096, out_features=14336, bias=False) (up_proj): Linear(in_features=4096, out_features=14336, bias=False) (down_proj): Linear(in_features=14336, out_features=4096, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): MistralRMSNorm() (post_attention_layernorm): MistralRMSNorm() ) ) (norm): MistralRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. 
By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
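To complement the "Quantization and sharding" section above, here is a minimal 4-bit loading sketch; the `BitsAndBytesConfig` values are illustrative assumptions rather than settings from the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "h2oai/h2ogpt-gm-7b-mistral-chat-sft-dpo-rag-v1"

# nf4 with bfloat16 compute is a common default; tune to your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shards layers across all visible GPUs
)

inputs = tokenizer("Why is drinking water so healthy?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```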
oztrkoguz/phi3_short_story_lora
oztrkoguz
2024-05-11T04:44:37Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-11T04:44:29Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** oztrkoguz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit

This Phi-3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ucalyptus/prem-1B-32k
ucalyptus
2024-05-11T04:34:16Z
8
1
peft
[ "peft", "tensorboard", "safetensors", "llama", "generated_from_trainer", "base_model:premai-io/prem-1B", "base_model:adapter:premai-io/prem-1B", "license:apache-2.0", "region:us" ]
null
2024-05-11T04:32:44Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: premai-io/prem-1B model-index: - name: prem-1B-32k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: premai-io/prem-1B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: argilla/distilabel-capybara-dpo-7k-binarized type: orpo.chat_template dataset_prepared_path: last_run_prepared val_set_size: 0.001 output_dir: ./prem-1B-32k save_safetensors: true sequence_len: 8192 sample_packing: false pad_to_sequence_len: false use_pose: true pose_max_context_len: 262144 min_sample_len: 6144 pose_num_chunks: 16 curriculum_sampling: true overrides_of_model_config: rope_theta: 500000.0 max_position_embeddings: 262144 # peft_use_dora: true adapter: lora peft_use_rslora: true lora_model_dir: lora_r: 1024 lora_alpha: 1024 lora_dropout: 0.1 lora_target_modules: - q_proj - k_proj - v_proj - o_proj lora_modules_to_save: - embed_tokens - lm_head wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 20 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.00001 max_grad_norm: 1.0 adam_beta2: 0.95 train_on_inputs: false group_by_length: false bf16: true fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true sdp_attention: s2_attention: warmup_steps: 10 evals_per_epoch: 8 saves_per_epoch: 8 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br> # prem-1B-32k This model is a fine-tuned version of [premai-io/prem-1B](https://huggingface.co/premai-io/prem-1B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0059 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7672 | 1.0 | 1 | 3.0074 | | 0.7672 | 2.0 | 2 | 2.6057 | | 0.7422 | 3.0 | 3 | 2.2898 | | 0.7211 | 4.0 | 4 | 2.1453 | | 0.6591 | 5.0 | 5 | 1.6360 | | 0.4514 | 6.0 | 6 | 0.7589 | | 0.24 | 7.0 | 7 | 0.6621 | | 0.1584 | 8.0 | 8 | 0.8121 | | 0.1235 | 9.0 | 9 | 0.7538 | | 0.0998 | 10.0 | 10 | 0.7743 | | 0.0869 | 11.0 | 11 | 0.7771 | | 0.1692 | 12.0 | 12 | 0.8293 | | 0.0702 | 13.0 | 13 | 0.8939 | | 0.063 | 14.0 | 14 | 0.9582 | | 0.0567 | 15.0 | 15 | 0.9825 | | 0.052 | 16.0 | 16 | 0.9960 | | 0.0488 | 17.0 | 17 | 0.9883 | | 0.0457 | 18.0 | 18 | 1.0004 | | 0.0436 | 19.0 | 19 | 1.0056 | | 0.0427 | 20.0 | 20 | 1.0059 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
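A hedged inference sketch for this adapter (not part of the original card): it loads the LoRA weights over the `premai-io/prem-1B` base and mirrors the `rope_theta`/`max_position_embeddings` overrides from the axolotl config above; whether those overrides are required at inference time is an assumption.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Overrides mirror `overrides_of_model_config` in the training config above.
base = AutoModelForCausalLM.from_pretrained(
    "premai-io/prem-1B",
    torch_dtype=torch.bfloat16,
    rope_theta=500000.0,
    max_position_embeddings=262144,
)
model = PeftModel.from_pretrained(base, "ucalyptus/prem-1B-32k")
tokenizer = AutoTokenizer.from_pretrained("premai-io/prem-1B")

inputs = tokenizer("A long-context prompt goes here.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```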
RichardErkhov/facebook_-_mbart-large-en-ro-8bits
RichardErkhov
2024-05-11T04:33:20Z
78
0
transformers
[ "transformers", "safetensors", "mbart", "text-generation", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-11T04:25:22Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mbart-large-en-ro - bnb 8bits - Model creator: https://huggingface.co/facebook/ - Original model: https://huggingface.co/facebook/mbart-large-en-ro/ Original model description: --- tags: - translation language: - en - ro license: mit --- ### mbart-large-en-ro This is mbart-large-cc25, finetuned on wmt_en_ro. It scores BLEU 28.1 without post processing and BLEU 38 with postprocessing. Instructions in `romanian_postprocessing.md` Original Code: https://github.com/pytorch/fairseq/tree/master/examples/mbart Docs: https://huggingface.co/transformers/master/model_doc/mbart.html Finetuning Code: examples/seq2seq/finetune.py (as of Aug 20, 2020)
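A usage sketch for en-to-ro translation, assumed from the standard MBart API rather than taken from the original card; it loads the original `facebook/mbart-large-en-ro` id, and substituting this 8-bit repo id should also work if it ships the tokenizer files, but that is an assumption.

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO"
)
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

article = "UN Chief Says There Is No Military Solution in Syria"
batch = tokenizer(article, return_tensors="pt")

# mbart starts decoding from the target language code.
translated = model.generate(**batch, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])
```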
mradermacher/NeoBase_8b_v0.2-GGUF
mradermacher
2024-05-11T04:21:51Z
105
0
transformers
[ "transformers", "gguf", "cybersecurity", "security", "cybersec", "base", "llama3", "en", "dataset:CyberNative/GeneralCyberSecurityTexts", "dataset:CyberNative/github_cybersecurity_READMEs", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T03:54:47Z
--- base_model: CyberNative/NeoBase_8b_v0.2 datasets: - CyberNative/GeneralCyberSecurityTexts - CyberNative/github_cybersecurity_READMEs language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - cybersecurity - security - cybersec - base - llama3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/CyberNative/NeoBase_8b_v0.2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you 
might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
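A minimal sketch of running one of the quants above with `llama-cpp-python` (not from the original card); the file name, context size, and sampling settings are assumptions.

```python
from llama_cpp import Llama

# Point at whichever quant you downloaded from the table above.
llm = Llama(
    model_path="NeoBase_8b_v0.2.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if a GPU is available
)

# NeoBase is a base model, so plain completion (no chat template) is used.
out = llm("A buffer overflow occurs when", max_tokens=128)
print(out["choices"][0]["text"])
```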
RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf
RichardErkhov
2024-05-11T04:20:26Z
204
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T02:07:35Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) dolphin-2.8-mistral-7b-v02 - GGUF - Model creator: https://huggingface.co/cognitivecomputations/ - Original model: https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02/ | Name | Quant method | Size | | ---- | ---- | ---- | | [dolphin-2.8-mistral-7b-v02.Q2_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q2_K.gguf) | Q2_K | 2.53GB | | [dolphin-2.8-mistral-7b-v02.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [dolphin-2.8-mistral-7b-v02.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ3_S.gguf) | IQ3_S | 2.96GB | | [dolphin-2.8-mistral-7b-v02.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [dolphin-2.8-mistral-7b-v02.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ3_M.gguf) | IQ3_M | 3.06GB | | [dolphin-2.8-mistral-7b-v02.Q3_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q3_K.gguf) | Q3_K | 3.28GB | | [dolphin-2.8-mistral-7b-v02.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [dolphin-2.8-mistral-7b-v02.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [dolphin-2.8-mistral-7b-v02.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [dolphin-2.8-mistral-7b-v02.Q4_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_0.gguf) | Q4_0 | 3.83GB | | [dolphin-2.8-mistral-7b-v02.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [dolphin-2.8-mistral-7b-v02.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [dolphin-2.8-mistral-7b-v02.Q4_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_K.gguf) | Q4_K | 4.07GB | | [dolphin-2.8-mistral-7b-v02.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [dolphin-2.8-mistral-7b-v02.Q4_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_1.gguf) | Q4_1 | 4.24GB | | 
[dolphin-2.8-mistral-7b-v02.Q5_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_0.gguf) | Q5_0 | 4.65GB | | [dolphin-2.8-mistral-7b-v02.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [dolphin-2.8-mistral-7b-v02.Q5_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_K.gguf) | Q5_K | 4.78GB | | [dolphin-2.8-mistral-7b-v02.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [dolphin-2.8-mistral-7b-v02.Q5_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_1.gguf) | Q5_1 | 5.07GB | | [dolphin-2.8-mistral-7b-v02.Q6_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q6_K.gguf) | Q6_K | 5.53GB | Original model description: --- base_model: alpindale/Mistral-7B-v0.2-hf language: - en license: apache-2.0 datasets: - cognitivecomputations/dolphin - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - jondurbin/airoboros-2.2.1 - teknium/openhermes-2.5 - m-a-p/Code-Feedback - m-a-p/CodeFeedback-Filtered-Instruction model-index: - name: dolphin-2.8-mistral-7b-v02 results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 0.469 verified: false --- # Dolphin 2.8 Mistral 7b v0.2 🐬 By Eric Hartford and Cognitive Computations Discord: https://discord.gg/8fbBeC7ZGx <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> My appreciation for the sponsors of Dolphin 2.8: - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 10xL40S node - [Winston Sou](https://twitter.com/WinsonDabbles) - Along with a generous anonymous sponsor, donated a massive personally owned compute resource! - [Abacus AI](https://abacus.ai/) - my employer and partner in many things. This model is based on [Mistral-7b-v0.2](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) a new base model released by MistralAI on March 23, 2024 but they have not yet published on HuggingFace. Thanks to @alpindale for converting / publishing. The base model has 32k context, and the full-weights fine-tune was with 16k sequence lengths. It took 3 days on 10x L40S provided by [Crusoe Cloud](https://crusoe.ai/) Dolphin-2.8 has a variety of instruction, conversational, and coding skills. Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed Apache 2.0. I grant permission for any use including commercial. Dolphin was trained on data generated from GPT4 among other models. 
# Evals ``` { "arc_challenge": { "acc,none": 0.5921501706484642, "acc_stderr,none": 0.014361097288449701, "acc_norm,none": 0.6339590443686007, "acc_norm_stderr,none": 0.014077223108470139 }, "gsm8k": { "exact_match,strict-match": 0.4783927217589083, "exact_match_stderr,strict-match": 0.013759618667051773, "exact_match,flexible-extract": 0.5367702805155421, "exact_match_stderr,flexible-extract": 0.013735191956468648 }, "hellaswag": { "acc,none": 0.6389165504879506, "acc_stderr,none": 0.004793330525656218, "acc_norm,none": 0.8338976299541924, "acc_norm_stderr,none": 0.00371411888431746 }, "mmlu": { "acc,none": 0.6122347243982339, "acc_stderr,none": 0.003893774654142997 }, "truthfulqa_mc2": { "acc,none": 0.5189872652778472, "acc_stderr,none": 0.014901128316426086 }, "winogrande": { "acc,none": 0.7971586424625099, "acc_stderr,none": 0.011301439925936643 } } ``` [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: alpindale/Mistral-7B-v0.2-hf model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer is_mistral_derived_model: true load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /workspace/datasets/dolphin201-sharegpt2.jsonl type: sharegpt - path: /workspace/datasets/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt - path: /workspace/datasets/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt - path: /workspace/datasets/m-a-p_Code-Feedback-sharegpt.jsonl type: sharegpt - path: /workspace/datasets/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt.jsonl type: sharegpt - path: /workspace/datasets/not_samantha_norefusals.jsonl type: sharegpt - path: /workspace/datasets/openhermes2_5-sharegpt.jsonl type: sharegpt chat_template: chatml dataset_prepared_path: last_run_prepared val_set_size: 0.001 output_dir: /workspace/dolphin-2.8-mistral-7b sequence_len: 16384 sample_packing: true pad_to_sequence_len: true wandb_project: dolphin wandb_entity: wandb_watch: wandb_run_id: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 3 num_epochs: 4 adam_beta2: 0.95 adam_epsilon: 0.00001 max_grad_norm: 1.0 lr_scheduler: cosine learning_rate: 0.000005 optimizer: adamw_bnb_8bit train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 eval_steps: 73 eval_table_size: eval_table_max_new_tokens: eval_sample_packing: false saves_per_epoch: save_steps: 73 save_total_limit: 2 debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.1 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" tokens: - "<|im_start|>" ``` </details><br> # workspace/dolphin-2.8-mistral-7b This model is a fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4828 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - distributed_type: multi-GPU - num_devices: 10 - gradient_accumulation_steps: 8 - total_train_batch_size: 240 - total_eval_batch_size: 30 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1736 | 0.0 | 1 | 1.0338 | | 0.6106 | 0.36 | 73 | 0.5439 | | 0.5766 | 0.72 | 146 | 0.5171 | | 0.5395 | 1.06 | 219 | 0.5045 | | 0.5218 | 1.42 | 292 | 0.4976 | | 0.5336 | 1.78 | 365 | 0.4915 | | 0.5018 | 2.13 | 438 | 0.4885 | | 0.5113 | 2.48 | 511 | 0.4856 | | 0.5066 | 2.84 | 584 | 0.4838 | | 0.4967 | 3.19 | 657 | 0.4834 | | 0.4956 | 3.55 | 730 | 0.4830 | | 0.5026 | 3.9 | 803 | 0.4828 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.0 # Quants - [dagbs/-GGUF](https://huggingface.co/dagbs/dolphin-2.8-mistral-7b-v02-GGUF) - [bartowski/ExLlamaV2](https://huggingface.co/bartowski/dolphin-2.8-mistral-7b-v02-exl2) - [solidrust/AWQ](https://huggingface.co/solidrust/dolphin-2.8-mistral-7b-v02-AWQ)
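A hedged sketch of running one of the GGUF files above with `llama-cpp-python`, using the ChatML format that the axolotl config specifies (`chat_template: chatml`); the file name and system prompt are assumptions.

```python
from llama_cpp import Llama

llm = Llama(model_path="dolphin-2.8-mistral-7b-v02.Q4_K_M.gguf", n_ctx=8192)

# ChatML turns, terminated by <|im_end|> as in the training config above.
prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about the sea.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```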
MysteriousAI/Mia-1B
MysteriousAI
2024-05-11T04:16:23Z
44
5
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "llama", "text-generation", "mlx", "conversational", "en", "hi", "dataset:OEvortex/vortex-mini", "base_model:OEvortex/HelpingAI-Lite", "base_model:quantized:OEvortex/HelpingAI-Lite", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-05T09:14:21Z
---
language:
- en
- hi
license: apache-2.0
library_name: transformers
tags:
- mlx
datasets:
- OEvortex/vortex-mini
base_model: OEvortex/HelpingAI-Lite
pipeline_tag: text-generation
---

**Model Card**

**Model Name:** Mia-1B

**Model Type:** Text Generation

**Owner:** MysteriousAI

**Description:** Mia-1B is an advanced text generation model developed by MysteriousAI. It leverages state-of-the-art AI technologies to generate coherent and contextually relevant text across various domains and topics. The model is aimed at advancing and democratizing artificial intelligence through open source and open science initiatives.

**Key Features:**
- **Model Size:** Mia-1B comprises 1.1 billion parameters, enabling it to capture complex linguistic patterns and nuances.
- **Tensor Type:** The model utilizes the FP16 (16-bit floating point) tensor type for efficient computation, enhancing performance and scalability.
- **Inference Endpoints:** Mia-1B can be easily integrated into applications through inference endpoints, facilitating seamless deployment and usage.
- **Uncensored Text Generation:** Mia-1B generates text without censorship, allowing users to explore a wide range of applications without limitations.
- **Fine-tuned:** Mia-1B is fine-tuned from the OEvortex/HelpingAI-Lite model, enhancing its performance and adaptability to various tasks.

**Use Cases:**
- **Content Generation:** Mia-1B is suitable for generating diverse content including articles, stories, dialogues, and more.
- **Conversational AI:** The model can be deployed in chatbots and conversational agents to engage users in natural and contextually relevant conversations.
- **AI-driven Applications:** Mia-1B enables the development of AI-driven applications in areas such as virtual assistants.
- **Creative Writing:** Writers and artists can leverage Mia-1B to explore new ideas and narrative structures in their creative works.

**Ethical Considerations:**
- **Content Moderation:** Users are advised to exercise caution and responsibility when utilizing Mia-1B in applications involving sensitive or potentially harmful content.
- **Bias and Fairness:** MysteriousAI is committed to addressing biases and promoting fairness in AI models. Efforts are made to mitigate biases present in Mia-1B's training data and output.

## Usage

# **Transformers**

```python
# pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "MysteriousAI/Mia-1B",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("MysteriousAI/Mia-1B")

prompt = "your prompt here."
messages = [
    {"role": "system", "content": "You are an uncensored AI that can do anything."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

# **Onnx runtime**

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MysteriousAI/Mia-1B")
model = ORTModelForCausalLM.from_pretrained("MysteriousAI/Mia-1B")

prompt = "your prompt."
messages = [
    {"role": "system", "content": "You are an uncensored AI that can do anything."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt")

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

**Copyright © 2024 MysteriousAI. All rights reserved.**

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MysteriousAI__Mia-1B)

| Metric | Value |
|---------------------------------|----:|
| Avg. | 36.82 |
| AI2 Reasoning Challenge (25-Shot) | 35.75 |
| HellaSwag (10-Shot) | 61.02 |
| MMLU (5-Shot) | 25.43 |
| TruthfulQA (0-shot) | 36.92 |
| Winogrande (5-shot) | 60.38 |
| GSM8k (5-shot) | 1.44 |
pszemraj/xtremedistil-l6-h256-OCR-quality
pszemraj
2024-05-11T04:06:16Z
113
1
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "en", "dataset:pszemraj/OCR-quality-classification", "base_model:microsoft/xtremedistil-l6-h256-uncased", "base_model:finetune:microsoft/xtremedistil-l6-h256-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-11T02:06:57Z
--- license: mit base_model: microsoft/xtremedistil-l6-h256-uncased tags: - generated_from_trainer metrics: - accuracy datasets: - pszemraj/OCR-quality-classification language: - en --- # xtremedistil-l6-h256-uncased: OCR-quality-classification This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://hf.co/microsoft/xtremedistil-l6-h256-uncased) on `pszemraj/OCR-quality-classification` It achieves the following results on the evaluation set: - Loss: 0.0316 - Accuracy: 0.994 - Num Input Tokens Seen: 57341952 ## Intended uses & limitations predict whether a document is clean or noisy ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:| | 0.0812 | 0.2660 | 250 | 0.0860 | 0.986 | 8192000 | | 0.0637 | 0.5321 | 500 | 0.0532 | 0.988 | 16384000 | | 0.031 | 0.7981 | 750 | 0.0463 | 0.99 | 24576000 | | 0.0315 | 1.0641 | 1000 | 0.0343 | 0.992 | 32765952 | | 0.0223 | 1.3301 | 1250 | 0.0337 | 0.994 | 40957952 | | 0.0137 | 1.5962 | 1500 | 0.0423 | 0.99 | 49149952 | | 0.0186 | 1.8622 | 1750 | 0.0316 | 0.994 | 57341952 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
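A minimal usage sketch (not from the original card); the returned label names depend on the model config, so inspect `model.config.id2label` rather than relying on the strings printed here.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="pszemraj/xtremedistil-l6-h256-OCR-quality")

noisy = "Th1s d0cum3nt w@s scann3d p00rly and the t3xt is gar bled."
clean = "This document was scanned cleanly and the text is perfectly readable."

for text in (noisy, clean):
    print(clf(text))  # e.g. [{'label': ..., 'score': ...}]
```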
kali1/whisper-medium-1000
kali1
2024-05-11T04:04:18Z
92
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-11T04:01:56Z
--- license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer model-index: - name: ko-address-whisper-medium results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ko-address-whisper-medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0290 - eval_cer: 0.6820 - eval_runtime: 70330.5853 - eval_samples_per_second: 0.08 - eval_steps_per_second: 0.005 - epoch: 1.5053 - step: 3000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
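An assumed transcription sketch, since the card itself gives no usage code; the audio file name is hypothetical and the Korean language hint is an assumption based on the model-index name.

```python
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kali1/whisper-medium-1000",
    device=0 if torch.cuda.is_available() else -1,
)

# "sample_address.wav" is a hypothetical local file.
result = asr("sample_address.wav", generate_kwargs={"language": "korean", "task": "transcribe"})
print(result["text"])
```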
Sardean-UChicago/FOX_Mistral-7B-Headline-QLoRA
Sardean-UChicago
2024-05-11T04:01:47Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T03:58:03Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** Sardean-UChicago - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
qminh369/token-classification-llmlingua2-xlm-roberta-1k7_yte_10_epoch_paper
qminh369
2024-05-11T03:58:47Z
116
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-11T03:27:36Z
--- license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer model-index: - name: token-classification-llmlingua2-xlm-roberta-1k7_yte_10_epoch_paper results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # token-classification-llmlingua2-xlm-roberta-1k7_yte_10_epoch_paper This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 15 | 0.6462 | | No log | 2.0 | 30 | 0.6202 | | No log | 3.0 | 45 | 0.6145 | | No log | 4.0 | 60 | 0.6136 | | No log | 5.0 | 75 | 0.6092 | | No log | 6.0 | 90 | 0.5965 | | No log | 7.0 | 105 | 0.5927 | | No log | 8.0 | 120 | 0.5897 | | No log | 9.0 | 135 | 0.5880 | | No log | 10.0 | 150 | 0.5878 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
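A hedged scoring sketch (not from the original card): LLMLingua-2-style models tag which tokens to keep during prompt compression, but the exact label semantics here are an assumption; check `model.config.id2label` to confirm.

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="qminh369/token-classification-llmlingua2-xlm-roberta-1k7_yte_10_epoch_paper",
    aggregation_strategy="none",
)

for tok in tagger("Prompt compression keeps only the most informative tokens."):
    print(tok["word"], tok["entity"], round(tok["score"], 3))
```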
PhoenixStormJr/Glide-Megaman-NT-Warrior-RVC
PhoenixStormJr
2024-05-11T03:53:05Z
0
0
null
[ "license:mit", "region:us" ]
null
2024-05-11T03:21:57Z
---
license: mit
---

![image/png](https://huggingface.co/PhoenixStormJr/Glide-Megaman-NT-Warrior-RVC/resolve/main/GlideImage.png)

This is Glide's voice from Megaman NT Warrior, created with RVC V2 (by Rejekts) and trained for 300 epochs. If you would like to use the model, go here: https://huggingface.co/PhoenixStormJr/RVC-V2-easy-gui-tutorial

Unfortunately, Glide doesn't talk much, so I had to use Tortoise-TTS to generate extra speech for Glide. I am NOT going to sit through hours of footage searching for ONE character's voice. If you would like a better model, go ahead and cut out sections of Glide's voice using Audacity or any other audio editing software, and upload the .wav file here. I will remove the background noise, enhance the audio, and train the model all myself, but I will NOT spend hours looking for Glide's voice. I need at least 6 minutes of Glide's speech to train the model.

Download the ZIP model here: https://huggingface.co/PhoenixStormJr/Glide-Megaman-NT-Warrior-RVC/resolve/main/Glide.zip?download=true

Download the .pth file here: https://huggingface.co/PhoenixStormJr/Glide-Megaman-NT-Warrior-RVC/resolve/main/Glide.pth?download=true

Download the .index here: https://huggingface.co/PhoenixStormJr/Glide-Megaman-NT-Warrior-RVC/resolve/main/added_IVF398_Flat_nprobe_1_Glide_v2.index?download=true

Listen to a sample audio here:

<audio controls src="https://huggingface.co/PhoenixStormJr/Glide-Megaman-NT-Warrior-RVC/resolve/main/GlideSample.wav"></audio>
RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-8bits
RichardErkhov
2024-05-11T03:52:36Z
77
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2312.00738", "arxiv:2205.11916", "arxiv:2306.05179", "arxiv:2306.05685", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-11T03:43:28Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SeaLLM-7B-v2 - bnb 8bits - Model creator: https://huggingface.co/SeaLLMs/ - Original model: https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/ Original model description: --- license: other license_name: seallms license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE language: - en - zh - vi - id - th - ms - km - lo - my - tl tags: - multilingual - sea --- <p align="center"> <img src="seal_logo.png" width="200" /> </p> # *SeaLLM-7B-v2* - Large Language Models for Southeast Asia # <strong style="color: red">BIG NEWS: <a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5">SeaLLM-7B-v2.5</a> is released with state-of-the-art performance in world knowledge and reasoning. SeaLLM-7B-v2 will begin deprecation.</strong> <p align="center"> <a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Technical Blog</a> &nbsp;&nbsp; <a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2" target="_blank" rel="noopener"> 🤗 Tech Memo</a> &nbsp;&nbsp; <a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B" target="_blank" rel="noopener"> 🤗 DEMO</a> &nbsp;&nbsp; <a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a> &nbsp;&nbsp; <a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a> </p> We introduce [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat), with half the size, outperforming performance across diverse multilingual tasks, from world knowledge, math reasoning, instruction following, etc. ### Highlights * [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves the **7B-SOTA** on the **Zero-shot CoT GSM8K** task with **78.2** score and outperforms GPT-3.5 in many GSM8K-translated tasks in SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭) as well as MGSM (🇨🇳 🇹🇭). It also surpasses GPT-3.5 in MATH CoT for Thai 🇹🇭. * It scores competitively against GPT-3.5 in many zero-shot CoT commonsense benchmark, with **82.5, 68.3, 80.9** scores on Arc-C, Winogrande, and Hellaswag. * It achieves **7.54** score on the 🇬🇧 **MT-bench**, it ranks 3rd place on the leaderboard for 7B category and is the most outperforming multilingual model. * It scores **45.74** on the VMLU benchmark for Vietnamese 🇻🇳, and is the only open-source multilingual model that can be competitive to monolingual models ([Vistral-7B](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)) of similar sizes. ### Release and DEMO - DEMO: [SeaLLMs/SeaLLM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B). - Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf). - Model weights: - [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2). - [SeaLLM-7B-v2-gguf](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf). - [SeaLLM-7B-v2-GGUF (thanks Lonestriker)](https://huggingface.co/LoneStriker/SeaLLM-7B-v2-GGUF). NOTE: use [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) to work properly. 
- Run locally: - [LM-studio](https://lmstudio.ai/): - [SeaLLM-7B-v2-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q4_0.gguf) and [SeaLLM-7B-v2-q8_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q8_0.gguf). - LM-studio requires this [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) to set chat template properly. - [ollama](https://ollama.ai/) `ollama run nxphi47/seallm-7b-v2:q4_0` - [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [mlx-community/SeaLLM-7B-v2-4bit-mlx](https://huggingface.co/mlx-community/SeaLLM-7B-v2-4bit-mlx) <blockquote style="color:red"> <p><strong style="color: red">Terms of Use and License</strong>: By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>. </blockquote> > **Disclaimer**: > We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation. > Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations. > In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos. > The logo was generated by DALL-E 3. ### What's new since SeaLLM-13B-v1 and SeaLLM-7B-v1? * SeaLLM-7B-v2 is continue-pretrained from [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) and underwent carefully designed tuning with focus in reasoning. ## Evaluation ### Zero-shot CoT Multilingual Math Reasoning [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves with **78.2** score on the GSM8K with zero-shot CoT reasoning, making it the **state of the art** in the realm of 7B models. It also outperforms GPT-3.5 in the same GSM8K benchmark as translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **22.4** vs 18.1 scores. ![fig_sea_math_side_by_side.png](fig_sea_math_side_by_side.png) <details> <summary>See details on English and translated GSM8K and MATH with zero-shot reasoning</summary> <br> | Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1 | Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6 | Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | | | Qwen1.5-7B-chat | 56.8 | 15.3 | 40 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 | | SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4 </details> Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)). 
#### Zero-shot MGSM [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Zh and Th. | Model | MGSM-Zh | MGSM-Th |-----| ----- | --- | ChatGPT (reported) | 61.2 | 47.2 | Qwen-14B-chat | 59.6 | 28 | SeaLLM-7B-v2 | **64.8** | **62.4** ### Zero-shot Commonsense Reasoning We compare [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) with ChatGPT and Mistral-7B-instruct on various zero-shot commonsense benchmarks (Arc-Challenge, Winogrande and Hellaswag). We use the 2-stage technique in [(Kojima et al., 2023)](https://arxiv.org/pdf/2205.11916.pdf) to grab the answer. Note that we **DID NOT** use "Let's think step-by-step" to invoke explicit CoT. | 0-shot reasoning | Arc-Challenge | Winogrande | Hellaswag |-----| ----- | --- | -- | | ChatGPT (reported) | 84.6* | 66.8* | 72.0* | ChatGPT (reproduced)| 84.1 | 63.1 | 79.5 | Mistral-7B-Instruct | 68.1 | 56.4 | 45.6 | Qwen1.5-7B-chat | 79.3 | 59.4 | 69.3 | SeaLLM-7B-v2 | 82.5 | 68.3 | 80.9 Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)). ### Multilingual World Knowledge We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi. | Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e |-----| ----- | --- | -- | ----- | ---- | --- | --- | --- | | GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41 | Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27 | Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25 | SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52 VMLU reproduce script [here](https://github.com/DAMO-NLP-SG/SeaLLMs/blob/main/evaluation/vmlu/vmlu_run.py). Lm-eval was used to evaluate MMLU. 0-shot VMLU scores for baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json)). ### MT-Bench On the English [MT-bench](https://arxiv.org/abs/2306.05685) metric, SeaLLM-7B-v2 achieves **7.54** score on the MT-bench (3rd place on the leaderboard for 7B category), outperforms many 70B models and is arguably the only one that handles 10 SEA languages. Refer to [mt_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/mt_bench/seallm_7b_v2.jsonl) for the MT-bench predictions of SeaLLM-7B-v2, and [here](https://github.com/lm-sys/FastChat/issues/3013#issue-2118685341) to reproduce it. 
| Model | Access | Langs | MT-Bench |
| --- | --- | --- | --- |
| GPT-4-turbo | closed | multi | 9.32 |
| GPT-4-0613 | closed | multi | 9.18 |
| Mixtral-8x7b (46B) | open | multi | 8.3 |
| Starling-LM-7B-alpha | open | mono (en) | 8.0 |
| OpenChat-3.5-7B | open | mono (en) | 7.81 |
| **SeaLLM-7B-v2** | **open** | **multi (10+)** | **7.54** |
| [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B-Chat) | open | multi | 6.96 |
| [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | open | mono (en) | 6.86 |
| Mistral-7B-instruct | open | mono (en) | 6.84 |

### Sea-Bench

Similar to MT-Bench, [Sea-bench](https://huggingface.co/datasets/SeaLLMs/Sea-bench) is a set of categorized instruction test sets that measures a model's ability as an assistant, with a specific focus on 9 SEA languages, including non-Latin low-resource languages. As shown, the largest improvements come from math reasoning, reaching GPT-3.5-level performance.

![fig_sea_bench_side_by_side.png](fig_sea_bench_side_by_side.png)

Refer to [sea_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/sea_bench/seallm_7b_v2.jsonl) for the Sea-bench predictions of SeaLLM-7B-v2.

### Usage

#### Instruction format

```python
prompt = """<|im_start|>system
You are a helpful assistant.</s><|im_start|>user
Hello world</s><|im_start|>assistant
Hi there, how can I help?</s>"""

# NOTE: previous commit has \n between </s> and <|im_start|>, that was incorrect!
# <|im_start|> is not a special token.
# Transformers chat_template should be consistent with vLLM format below.
# ! ENSURE 1 and only 1 bos `<s>` at the beginning of sequence
# (assumes `tokenizer` has been loaded as in the next section)
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>']
```

#### Using transformers' chat_template

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# use bfloat16 to ensure the best performance.
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world"},
    {"role": "assistant", "content": "Hi there, how can I help you today?"},
    {"role": "user", "content": "Explain general relativity in details."}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '▁you', '▁today', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Ex', 'plain', '▁general', '▁rel', 'ativity', '▁in', '▁details', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>']

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

#### Using vLLM

```python
from vllm import LLM, SamplingParams

TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>"
TURN_PREFIX = "<|im_start|>{role}\n"
# There is no \n between </s> and <|im_start|>.

def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
    # conversations: list of dict with key `role` and `content` (openai format)
    if conversations[0]['role'] != 'system' and system_prompt is not None:
        conversations = [{"role": "system", "content": system_prompt}] + conversations
    text = ''
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        text += prompt
    if add_assistant_prefix:
        prompt = TURN_PREFIX.format(role='assistant')
        text += prompt
    return text

sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['</s>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2", dtype="bfloat16")

# the formatter expects an OpenAI-style list of turns
message = [{"role": "user", "content": "Explain general relativity in details."}]
prompt = seallm_chat_convo_format(message, True)
gen = llm.generate(prompt, sparams)

print(gen[0].outputs[0].text)
```

#### Fine-tuning SeaLLM-7B-v2

Fine-tuning should follow the chat format and accurately mask out the source tokens. Here is an example.

```python
conversations = [
    {"role": "system", "content": "You are helful assistant."},
    {"role": "user", "content": "Hello world."},
    {"role": "assistant", "content": "Hi there, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
    {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]

def seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False):
    """
    Inputs:
        conversations: list of dict following openai format, eg
            conversations = [
                {"role": "system", "content": "You are helful assistant."},
                {"role": "user", "content": "Hello world."},
                {"role": "assistant", "content": "Hi there, how can I help?"},
                {"role": "user", "content": "Tell me a joke."},
                {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
            ]
        add_assistant_prefix: whether to add assistant_prefix, only for inference decoding
    Outputs:
        tokenize_output_sample, {
            "input_ids": ...
            "token_type_ids": 1 if train and 0 if masked out (not train)
        }
    During training, you need to create labels, with masked-out tokens = -100, to avoid loss computation.
        labels = sample['input_ids'].clone()
        labels[sample['token_type_ids'] == 0] = -100
    """
    TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>"
    TURN_PREFIX = "<|im_start|>{role}\n"
    sample = None
    assistant_prefix_len = None
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        turn_sample = tokenizer(
            prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False,
            return_token_type_ids=True,
        )
        if turn['role'] == 'assistant':
            if assistant_prefix_len is None:
                assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False))
            turn_sample['token_type_ids'][assistant_prefix_len:] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len)
        if sample is None:
            sample = turn_sample
        else:
            for k in turn_sample.keys():
                sample[k].extend(turn_sample[k])
    if add_assistant_prefix:
        assistant_prefix_sample = tokenizer(
            TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False,
            add_special_tokens=False, return_token_type_ids=True,
        )
        for k in sample.keys():
            sample[k].extend(assistant_prefix_sample[k])
    if tokenizer.add_bos_token:
        sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids']
        sample['attention_mask'] = [1] + sample['attention_mask']
        sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids']
    return sample

# ! testing
sample = seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations)
print(tokenizer.convert_ids_to_tokens(sample['input_ids']))
print(sample['token_type_ids'])
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁hel', 'ful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Tell', '▁me', '▁a', '▁joke', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Why', '▁don', "'", 't', '▁scientists', '▁trust', '▁atoms', '?', '▁Because', '▁they', '▁make', '▁up', '▁everything', '.', '</s>']
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

## Acknowledgement to Our Linguists

We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety.

## Citation

If you find our project useful, we hope you would kindly star our repo and cite our work as follows: Corresponding Author: [[email protected]](mailto:[email protected])

**Author list and order will change!**

* `*` and `^` are equal contributions.
``` @article{damonlpsg2023seallm, author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing}, title = {SeaLLMs - Large Language Models for Southeast Asia}, year = 2023, Eprint = {arXiv:2312.00738}, } ```
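As a practical supplement to the fine-tuning example above, here is a minimal sketch (an illustration, not the authors' exact training code) that turns the tokenized sample into training tensors, following the labels recipe from the docstring:

```python
import torch

# Reuses `tokenizer`, `conversations`, and the tokenize function defined above.
sample = seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations)

input_ids = torch.tensor(sample['input_ids'], dtype=torch.long)
attention_mask = torch.tensor(sample['attention_mask'], dtype=torch.long)
token_type_ids = torch.tensor(sample['token_type_ids'], dtype=torch.long)

# Mask out source tokens (token_type_ids == 0) so the loss is computed
# only on assistant tokens.
labels = input_ids.clone()
labels[token_type_ids == 0] = -100

batch = {
    "input_ids": input_ids.unsqueeze(0),
    "attention_mask": attention_mask.unsqueeze(0),
    "labels": labels.unsqueeze(0),
}
# A standard causal-LM training step would then be:
# outputs = model(**batch); outputs.loss.backward()
```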
netcat420/MFANN3bv0.8
netcat420
2024-05-11T03:49:06Z
11
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "text-classification", "en", "dataset:netcat420/MFANN", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-10T05:11:43Z
---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
language:
- en
pipeline_tag: text-classification
---

MFANN 3b version 0.8

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/DaGRoYdRwb_q5ni-2Wh6R.png)

Fine-tuned on the MFANN dataset as of 5/10/2024; note that MFANN is an ever-expanding dataset.

| Benchmark | Score |
|-----------|-------|
| Average | 63.5 |
| ARC | 61.52 |
| HellaSwag | 76.09 |
| MMLU | 55.58 |
| TruthfulQA | 54.44 |
| Winogrande | 78.53 (very high score for a 3b model!) |
| GSM8K | 54.81 |
osouza/gemma-portuguese-luana-2b-questoes-string
osouza
2024-05-11T03:48:32Z
136
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T03:45:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QinLiuNLP/mistral-poe-10p-10th-0.8
QinLiuNLP
2024-05-11T03:48:24Z
26
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-10T22:46:26Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 datasets: - HuggingFaceH4/ultrachat_200k model-index: - name: mistral-poe-10p-10th-0.8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-poe-10p-10th-0.8 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0957 | 1.0 | 1949 | nan | ### Framework versions - PEFT 0.7.1 - Transformers 4.39.0.dev0 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
learnmer/unsloth-llama-3-8b
learnmer
2024-05-11T03:45:33Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-10T15:10:27Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** learnmer - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
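For inference, here is a minimal sketch following Unsloth's standard loading pattern; the sequence length and quantization settings are illustrative assumptions, not settings taken from this repo:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="learnmer/unsloth-llama-3-8b",
    max_seq_length=2048,   # assumed; adjust to your needs
    load_in_4bit=True,     # assumed; reduces memory at some quality cost
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```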
ytzi/starcoder2-3b-rkt-macros-epoch-0.5
ytzi
2024-05-11T03:43:24Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T03:43:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Litzy619/Phi0503HMA9
Litzy619
2024-05-11T03:32:25Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-05-10T23:26:54Z
--- license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - generated_from_trainer model-index: - name: Phi0503HMA9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Phi0503HMA9 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0673 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.4583 | 0.09 | 10 | 0.9323 | | 0.4372 | 0.18 | 20 | 0.2609 | | 0.6807 | 0.27 | 30 | 0.3165 | | 0.2591 | 0.36 | 40 | 0.2379 | | 0.2397 | 0.45 | 50 | 0.2319 | | 0.2086 | 0.54 | 60 | 0.1902 | | 0.1866 | 0.63 | 70 | 0.1773 | | 0.1667 | 0.73 | 80 | 0.1585 | | 0.1097 | 0.82 | 90 | 0.0932 | | 0.0865 | 0.91 | 100 | 0.0821 | | 0.0846 | 1.0 | 110 | 0.0800 | | 0.074 | 1.09 | 120 | 0.0792 | | 0.0682 | 1.18 | 130 | 0.0861 | | 0.0765 | 1.27 | 140 | 0.0778 | | 0.0711 | 1.36 | 150 | 0.0767 | | 0.08 | 1.45 | 160 | 0.0786 | | 0.0725 | 1.54 | 170 | 0.0716 | | 0.07 | 1.63 | 180 | 0.0709 | | 0.0589 | 1.72 | 190 | 0.1346 | | 0.4282 | 1.81 | 200 | 0.1490 | | 0.32 | 1.9 | 210 | 0.1215 | | 0.2609 | 1.99 | 220 | 0.1303 | | 0.0654 | 2.08 | 230 | 0.0749 | | 0.0484 | 2.18 | 240 | 0.0765 | | 0.0417 | 2.27 | 250 | 0.0716 | | 0.0437 | 2.36 | 260 | 0.0718 | | 0.0477 | 2.45 | 270 | 0.0689 | | 0.0379 | 2.54 | 280 | 0.0696 | | 0.037 | 2.63 | 290 | 0.0692 | | 0.0411 | 2.72 | 300 | 0.0689 | | 0.0457 | 2.81 | 310 | 0.0675 | | 0.0408 | 2.9 | 320 | 0.0669 | | 0.0422 | 2.99 | 330 | 0.0673 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
kali1/ko-address-whisper-medium-processor
kali1
2024-05-11T03:29:20Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T03:29:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Raneechu/litreviewUnsupervised
Raneechu
2024-05-11T03:27:53Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-05-11T03:27:49Z
--- license: llama2 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-hf model-index: - name: litreview results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # litreview This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.2986 | 0.0013 | 1 | 3.2847 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.1+cu121 - Datasets 2.14.5 - Tokenizers 0.19.1 ## Training procedure ### Framework versions - PEFT 0.6.2
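To use the adapter, here is a minimal sketch (assuming this repo hosts LoRA adapter weights on top of the gated Llama-2-7b base, as the tags indicate):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # gated; requires an accepted license
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "Raneechu/litreviewUnsupervised")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```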
LoneStriker/dolphin-2.9.1-llama-3-8b-8.0bpw-h8-exl2
LoneStriker
2024-05-11T03:26:47Z
8
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "conversational", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-05-11T03:23:19Z
---
license: other
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
- axolotl
model-index:
- name: out
  results: []
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---

# Dolphin 2.9.1 Llama 3 8b 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

Discord: https://discord.gg/8fbBeC7ZGx

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

We have retrained our LLama-3-8b fine-tune to address behavioral issues in the initial 2.9 dataset. Specifically, Systemchat was causing the model to be *too* reliant on the system prompt. Additionally, it had an occasional quirk that would cause the model to overly reference the system prompt. We also found that generation length was at times not sufficient for a given task; we identified the culprit as Ultrachat. Accounting for these concerns, we removed Systemchat and Ultrachat from the dataset. It is otherwise identical to dolphin-2.9.

Our appreciation for the sponsors of Dolphin 2.9.1:

- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xL40S node

This model is based on Llama-3-8b, and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

The base model has 8k context, and the full-weight fine-tuning used a 4k sequence length. Training took 1.5 days on an 8x L40S node provided by Crusoe Cloud.

This model was trained FFT on all parameters, using the ChatML prompt template format.

Example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

Dolphin is licensed according to Meta's Llama license. We grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.
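As a quickstart, here is a minimal sketch of building the ChatML prompt with the Transformers chat-template API. It targets the original full-weight repo (assumed to be `cognitivecomputations/dolphin-2.9.1-llama-3-8b`; the exl2 quant in this repo instead requires an ExLlamaV2-compatible loader), and assumes the tokenizer ships the ChatML chat template shown above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9.1-llama-3-8b"  # assumed full-weight original
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a limerick about dolphins."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```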
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/0pqSc8jsJlhBH8dcgpwE7.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer tokenizer_use_fast: false load_in_8bit: false load_in_4bit: false strict: false model_config: datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy val_set_size: 0.0002 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true gradient_accumulation_steps: 4 micro_batch_size: 3 num_epochs: 3 logging_steps: 1 optimizer: adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 wandb_project: dolphin-2.9-mixtral-8x22b wandb_watch: wandb_run_id: wandb_log_model: train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true saves_per_epoch: 4 save_total_limit: 2 save_steps: evals_per_epoch: 4 eval_sample_packing: false debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
CultriX/CultMerge-7B-v1
CultriX
2024-05-11T03:26:04Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2403.19522", "base_model:CultriX/MonaTrix-7B-DPOv2", "base_model:merge:CultriX/MonaTrix-7B-DPOv2", "base_model:CultriX/NeuralCeptrix-7B-SLERP", "base_model:merge:CultriX/NeuralCeptrix-7B-SLERP", "base_model:CultriX/NeuralTrix-bf16", "base_model:merge:CultriX/NeuralTrix-bf16", "base_model:CultriX/Wernicke-7B-dpo", "base_model:merge:CultriX/Wernicke-7B-dpo", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T03:22:22Z
--- base_model: - CultriX/MonaTrix-7B-DPOv2 - CultriX/NeuralCeptrix-7B-SLERP - CultriX/NeuralTrix-bf16 - CultriX/Wernicke-7B-dpo library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [CultriX/MonaTrix-7B-DPOv2](https://huggingface.co/CultriX/MonaTrix-7B-DPOv2) as a base. ### Models Merged The following models were included in the merge: * [CultriX/NeuralCeptrix-7B-SLERP](https://huggingface.co/CultriX/NeuralCeptrix-7B-SLERP) * [CultriX/NeuralTrix-bf16](https://huggingface.co/CultriX/NeuralTrix-bf16) * [CultriX/Wernicke-7B-dpo](https://huggingface.co/CultriX/Wernicke-7B-dpo) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: CultriX/MonaTrix-7B-DPOv2 - model: CultriX/NeuralCeptrix-7B-SLERP - model: CultriX/NeuralTrix-bf16 - model: CultriX/Wernicke-7B-dpo merge_method: model_stock base_model: CultriX/MonaTrix-7B-DPOv2 dtype: bfloat16 ```
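To reproduce a merge like this, save the YAML above as `config.yml` and run it through mergekit's CLI, e.g. `mergekit-yaml config.yml ./merged-model` (this assumes mergekit is installed, e.g. via `pip install mergekit`; the paths are placeholders).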
RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf
RichardErkhov
2024-05-11T03:22:02Z
22
0
null
[ "gguf", "arxiv:2308.07317", "arxiv:2307.09288", "endpoints_compatible", "region:us" ]
null
2024-05-11T01:29:54Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Platypus2-7B - GGUF - Model creator: https://huggingface.co/garage-bAInd/ - Original model: https://huggingface.co/garage-bAInd/Platypus2-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Platypus2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q2_K.gguf) | Q2_K | 2.36GB | | [Platypus2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ3_XS.gguf) | IQ3_XS | 2.6GB | | [Platypus2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ3_S.gguf) | IQ3_S | 2.75GB | | [Platypus2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [Platypus2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ3_M.gguf) | IQ3_M | 2.9GB | | [Platypus2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q3_K.gguf) | Q3_K | 3.07GB | | [Platypus2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [Platypus2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [Platypus2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [Platypus2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_0.gguf) | Q4_0 | 3.56GB | | [Platypus2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ4_NL.gguf) | IQ4_NL | 3.58GB | | [Platypus2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [Platypus2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_K.gguf) | Q4_K | 3.8GB | | [Platypus2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [Platypus2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_1.gguf) | Q4_1 | 3.95GB | | [Platypus2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_0.gguf) | Q5_0 | 4.33GB | | [Platypus2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_K_S.gguf) | Q5_K_S | 4.33GB | | [Platypus2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_K.gguf) | Q5_K | 4.45GB | | [Platypus2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_K_M.gguf) | Q5_K_M | 4.45GB | | [Platypus2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_1.gguf) | Q5_1 | 4.72GB | | 
[Platypus2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q6_K.gguf) | Q6_K | 5.15GB |

Original model description:
---
license: cc-by-nc-sa-4.0
language:
- en
datasets:
- garage-bAInd/Open-Platypus
---

# Platypus2-7B

**NOTE**: There is a known issue with LLaMa-2 7B: fine-tuning only works if you use `fp16=False` and `bf16=True` in the HF trainer. We are gathering more intel on this, but if you have any thoughts about this issue or performance, please let us know!

Platypus-7B is an instruction fine-tuned model based on the LLaMA2-7B transformer architecture.

![Platty](./Best_Platty_small.jpeg)

### Model Details

* **Trained by**: Cole Hunter & Ariel Lee
* **Model type:** **Platypus2-7B** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English
* **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))

### Prompt Template
```
### Instruction:

<prompt> (without the <>)

### Response:
```

### Training Dataset

`garage-bAInd/Platypus2-7B` was trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.

### Training Procedure

`garage-bAInd/Platypus2-7B` was instruction fine-tuned using LoRA on 1 A100 80GB. For training details and inference instructions please see the [Platypus2](https://github.com/arielnlee/Platypus) GitHub repo.

### Reproducing Evaluation Results

Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on 1 A100 80GB GPU.

ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks arc_challenge --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```

HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks hellaswag --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/hellaswag_10shot.json --device cuda --num_fewshot 10
```

MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks hendrycksTest-* --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/mmlu_5shot.json --device cuda --num_fewshot 5
```

TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks truthfulqa_mc --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/truthfulqa_0shot.json --device cuda
```

### Limitations and bias

Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

### Citations
```bibtex
@article{platypus2023,
    title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
    author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
    booktitle={arXiv preprint arxiv:2308.07317},
    year={2023}
}
```
```bibtex
@misc{touvron2023llama,
    title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
    year={2023},
    eprint={2307.09288},
    archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
    hu2022lora,
    title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
    author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_garage-bAInd__Platypus2-7B)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 45.69 |
| ARC (25-shot) | 55.2 |
| HellaSwag (10-shot) | 78.84 |
| MMLU (5-shot) | 49.83 |
| TruthfulQA (0-shot) | 40.64 |
| Winogrande (5-shot) | 73.48 |
| GSM8K (5-shot) | 1.82 |
| DROP (3-shot) | 20.02 |
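As a usage sketch (not part of the original cards), any of the quants above can be run locally with llama-cpp-python, applying the prompt template from the model description; the chosen file name and generation settings here are illustrative:

```python
from llama_cpp import Llama

# Assumes Platypus2-7B.Q4_K_M.gguf has been downloaded from this repo.
llm = Llama(model_path="Platypus2-7B.Q4_K_M.gguf", n_ctx=4096)

prompt = "### Instruction:\n\nWhat makes a platypus unusual among mammals?\n\n### Response:\n"
out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```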
LoneStriker/dolphin-2.9.1-llama-3-8b-5.0bpw-h6-exl2
LoneStriker
2024-05-11T03:20:27Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "conversational", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-05-11T03:18:02Z
---
license: other
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
- axolotl
model-index:
- name: out
  results: []
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---

# Dolphin 2.9.1 Llama 3 8b 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

Discord: https://discord.gg/8fbBeC7ZGx

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

We have retrained our LLama-3-8b fine-tune to address behavioral issues in the initial 2.9 dataset. Specifically, Systemchat was causing the model to be *too* reliant on the system prompt. Additionally, it had an occasional quirk that would cause the model to overly reference the system prompt. We also found that generation length was at times not sufficient for a given task; we identified the culprit as Ultrachat. Accounting for these concerns, we removed Systemchat and Ultrachat from the dataset. It is otherwise identical to dolphin-2.9.

Our appreciation for the sponsors of Dolphin 2.9.1:

- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xL40S node

This model is based on Llama-3-8b, and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

The base model has 8k context, and the full-weight fine-tuning used a 4k sequence length. Training took 1.5 days on an 8x L40S node provided by Crusoe Cloud.

This model was trained FFT on all parameters, using the ChatML prompt template format.

Example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

Dolphin is licensed according to Meta's Llama license. We grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/0pqSc8jsJlhBH8dcgpwE7.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer tokenizer_use_fast: false load_in_8bit: false load_in_4bit: false strict: false model_config: datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy val_set_size: 0.0002 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true gradient_accumulation_steps: 4 micro_batch_size: 3 num_epochs: 3 logging_steps: 1 optimizer: adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 wandb_project: dolphin-2.9-mixtral-8x22b wandb_watch: wandb_run_id: wandb_log_model: train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true saves_per_epoch: 4 save_total_limit: 2 save_steps: evals_per_epoch: 4 eval_sample_packing: false debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
LoneStriker/dolphin-2.9.1-llama-3-8b-3.0bpw-h6-exl2
LoneStriker
2024-05-11T03:15:48Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "conversational", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "exl2", "region:us" ]
text-generation
2024-05-11T03:14:02Z
---
license: other
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
- axolotl
model-index:
- name: out
  results: []
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---

# Dolphin 2.9.1 Llama 3 8b 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

Discord: https://discord.gg/8fbBeC7ZGx

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

We have retrained our LLama-3-8b fine-tune to address behavioral issues in the initial 2.9 dataset. Specifically, Systemchat was causing the model to be *too* reliant on the system prompt. Additionally, it had an occasional quirk that would cause the model to overly reference the system prompt. We also found that generation length was at times not sufficient for a given task; we identified the culprit as Ultrachat. Accounting for these concerns, we removed Systemchat and Ultrachat from the dataset. It is otherwise identical to dolphin-2.9.

Our appreciation for the sponsors of Dolphin 2.9.1:

- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xL40S node

This model is based on Llama-3-8b, and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

The base model has 8k context, and the full-weight fine-tuning used a 4k sequence length. Training took 1.5 days on an 8x L40S node provided by Crusoe Cloud.

This model was trained FFT on all parameters, using the ChatML prompt template format.

Example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

Dolphin is licensed according to Meta's Llama license. We grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/0pqSc8jsJlhBH8dcgpwE7.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer tokenizer_use_fast: false load_in_8bit: false load_in_4bit: false strict: false model_config: datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy val_set_size: 0.0002 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true gradient_accumulation_steps: 4 micro_batch_size: 3 num_epochs: 3 logging_steps: 1 optimizer: adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 wandb_project: dolphin-2.9-mixtral-8x22b wandb_watch: wandb_run_id: wandb_log_model: train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true saves_per_epoch: 4 save_total_limit: 2 save_steps: evals_per_epoch: 4 eval_sample_packing: false debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
lodrick-the-lafted/Fuselage-8B
lodrick-the-lafted
2024-05-11T03:15:43Z
72
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T23:38:25Z
---
license: apache-2.0
---

<img src=https://huggingface.co/lodrick-the-lafted/Fuselage-8B/resolve/main/fuselage.png>

Fuselage-8B

It's [Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total](https://huggingface.co/Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total), but with extra ablations where the harmless dataset was OpusStories and the harmful dataset was llm-attacks.
santiblanko/llama8-brute
santiblanko
2024-05-11T03:09:07Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-11T01:50:29Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** santiblanko - **License:** apache-2.0
mradermacher/LuminRP-13B-128k-GGUF
mradermacher
2024-05-11T03:08:21Z
15
1
transformers
[ "transformers", "gguf", "frankenmoe", "merge", "mergekit", "lazymergekit", "en", "base_model:Ppoyaa/LuminRP-13B-128k", "base_model:quantized:Ppoyaa/LuminRP-13B-128k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-10T02:00:52Z
--- base_model: Ppoyaa/LuminRP-13B-128k language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - frankenmoe - merge - mergekit - lazymergekit --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Ppoyaa/LuminRP-13B-128k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.IQ3_M.gguf) | IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q5_K_S.gguf) | Q5_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q5_K_M.gguf) | Q5_K_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q6_K.gguf) | Q6_K | 10.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF/resolve/main/LuminRP-13B-128k.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
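As a concrete, illustrative companion to the usage note above, one of the files in the table can be loaded with `llama-cpp-python`; the quant choice and parameters here are just examples:

```python
# Illustrative: load one of the quants above with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="LuminRP-13B-128k.Q4_K_M.gguf",  # any file from the table
    n_ctx=8192,       # raise toward the model's 128k only if RAM allows
    n_gpu_layers=-1,  # offload all layers to GPU when available
)
out = llm("Write a one-paragraph scene opening.", max_tokens=128)
print(out["choices"][0]["text"])
```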
seifooo/names-whisper-en
seifooo
2024-05-11T02:34:35Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-04-30T21:32:13Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: openai/whisper-small metrics: - wer model-index: - name: names-whisper-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # names-whisper-en This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.0544 - Wer: 1.9975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.0722 | 0.8757 | 1000 | 0.0689 | 2.3877 | | 0.0287 | 1.7513 | 2000 | 0.0569 | 2.0774 | | 0.0134 | 2.6270 | 3000 | 0.0541 | 2.0056 | | 0.0084 | 3.5026 | 4000 | 0.0536 | 2.0440 | | 0.0053 | 4.3783 | 5000 | 0.0544 | 1.9975 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
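A minimal transcription sketch for this checkpoint (the audio path is an illustrative placeholder):

```python
# Minimal transcription sketch; "sample.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="seifooo/names-whisper-en")
print(asr("sample.wav")["text"])
```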
saaduddinM/Mamba1.4B_mul_large
saaduddinM
2024-05-11T02:33:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-09T12:32:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
awang11/fill50k_diffusion_test
awang11
2024-05-11T02:31:03Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-10T21:09:09Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet - diffusers-training base_model: runwayml/stable-diffusion-v1-5 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-awang11/fill50k_diffusion_test These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below. prompt: red circle with blue background ![images_0](./images_0.png) prompt: cyan circle with brown floral background ![images_1](./images_1.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
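Pending the snippet in the TODO above, a minimal sketch along the usual diffusers ControlNet pattern — it assumes the weights load directly from this repo's root, and the conditioning image path is a placeholder:

```python
# Sketch: run these ControlNet weights on top of the SD 1.5 base model.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "awang11/fill50k_diffusion_test", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("conditioning.png")  # placeholder fill50k-style circle image
image = pipe("red circle with blue background", image=cond, num_inference_steps=30).images[0]
image.save("out.png")
```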
Acopa/ControlNet_SDXL_128
Acopa
2024-05-11T02:16:45Z
4
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-11T01:34:08Z
--- license: openrail++ library_name: diffusers tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - controlnet - diffusers-training base_model: stabilityai/stable-diffusion-xl-base-1.0 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-Acopa/ControlNet_SDXL_128 These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
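Pending the TODO above, an illustrative sketch for the SDXL variant — it assumes the ControlNet weights load from the repo root, and the conditioning image is a placeholder:

```python
# Sketch: run these ControlNet weights on top of the SDXL base model.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "Acopa/ControlNet_SDXL_128", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

cond = load_image("conditioning.png")  # placeholder conditioning image
image = pipe("a detailed photo", image=cond).images[0]
image.save("out.png")
```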
Vimax97/sdxl-lora-vimu
Vimax97
2024-05-11T02:15:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-11T02:15:20Z
--- license: apache-2.0 ---
Acopa/ControlNet_SDXL_256
Acopa
2024-05-11T02:14:15Z
6
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-11T01:33:58Z
--- license: openrail++ library_name: diffusers tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - controlnet - diffusers-training base_model: stabilityai/stable-diffusion-xl-base-1.0 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-Acopa/ControlNet_SDXL_256 These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
sosoai/hansoldeco-llama3-8b-v0.1
sosoai
2024-05-11T02:13:34Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-08T00:53:57Z
base model = meta-llama/Meta-Llama-3-8B
sree711/tabdetext
sree711
2024-05-11T02:13:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-05T11:45:23Z
--- license: apache-2.0 --- # Model Card for Model ID ## Model Details RT-DETR-based model for Table Detection and Structure Recognition. ### Model Description - **Model type:** Table Detection and Structure Recognition - **Finetuned from model [optional]:** RT-DETR ### Model Sources - **Repository:** https://github.com/sreesankar711/TabDetExt.git
ddh0/Mixtral-8x7B-Instruct-v0.1-bf16-GGUF
ddh0
2024-05-11T02:11:40Z
11
0
null
[ "gguf", "text-generation", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-10T22:24:03Z
--- license: apache-2.0 pipeline_tag: text-generation --- # Mixtral-8x7B-Instruct-v0.1-bf16-GGUF This is [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), converted to GGUF without changing the tensor data type (the weights remain in bf16).
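To confirm the tensors really kept their original precision, the file's metadata can be inspected with the `gguf` Python package — a sketch, where the file name is a placeholder for whichever `.gguf` file this repo contains:

```python
# Sketch: inspect tensor dtypes in the GGUF file with the gguf-py package.
# The file name below is a hypothetical placeholder.
from gguf import GGUFReader

reader = GGUFReader("mixtral-8x7b-instruct-v0.1-bf16.gguf")
for tensor in reader.tensors[:8]:
    print(tensor.name, tensor.tensor_type)
```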
tistak/sn6-0
tistak
2024-05-11T02:11:30Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-03T07:24:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pszemraj/MiniLMv2-L6-H384_R-OCR-quality
pszemraj
2024-05-11T02:05:42Z
113
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "data processing", "data filter", "text quality", "en", "dataset:pszemraj/OCR-quality-classification", "base_model:pszemraj/MiniLMv2-L6-H384_R-fineweb-100k", "base_model:finetune:pszemraj/MiniLMv2-L6-H384_R-fineweb-100k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-11T01:48:39Z
--- license: apache-2.0 base_model: pszemraj/MiniLMv2-L6-H384_R-fineweb-100k tags: - data processing - data filter - text quality metrics: - accuracy datasets: - pszemraj/OCR-quality-classification language: - en --- # MiniLMv2-L6-H384_R-OCR-quality This model is a fine-tuned version of [pszemraj/MiniLMv2-L6-H384_R-fineweb-100k](https://hf.co/pszemraj/MiniLMv2-L6-H384_R-fineweb-100k) on `pszemraj/OCR-quality-classification`. It achieves the following results on the evaluation set: - Loss: 0.0162 - Accuracy: 0.996 - Num Input Tokens Seen: 61536256 ## Intended uses & limitations Predicts whether a document's text is clean or noisy. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:| | 0.0298 | 0.2660 | 250 | 0.0448 | 0.99 | 8192000 | | 0.0141 | 0.5321 | 500 | 0.0330 | 0.99 | 16384000 | | 0.02 | 0.7981 | 750 | 0.0298 | 0.99 | 24576000 | | 0.0085 | 1.0641 | 1000 | 0.0222 | 0.994 | 32765952 | | 0.0174 | 1.3301 | 1250 | 0.0207 | 0.994 | 40957952 | | 0.0104 | 1.5962 | 1500 | 0.0202 | 0.996 | 49149952 | | 0.0237 | 1.8622 | 1750 | 0.0185 | 0.996 | 57341952 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
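A minimal sketch of the intended use (the example strings are illustrative; label names come from the model's config):

```python
# Minimal sketch: classify passages as clean vs. noisy text.
from transformers import pipeline

clf = pipeline("text-classification", model="pszemraj/MiniLMv2-L6-H384_R-OCR-quality")
print(clf("Th1s paragrvph c0ntains obvi0us OCR n0ise and broken w0rds."))
print(clf("This paragraph is clean, grammatical, and well formatted."))
```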
tistak/sn6-1
tistak
2024-05-11T02:03:36Z
36
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-03T07:24:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yuxinhhh/hbj
yuxinhhh
2024-05-11T01:59:22Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-05-11T01:59:22Z
--- license: bigscience-bloom-rail-1.0 ---
mikeHak/lora-1.5xl-acne-face-01
mikeHak
2024-05-11T01:56:54Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-11T01:54:57Z
--- license: apache-2.0 ---
KaggleMasterX/llama3_orpo_5k_ins
KaggleMasterX
2024-05-11T01:52:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T01:50:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
saaduddinM/Mistral7B_mul_small
saaduddinM
2024-05-11T01:45:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T01:45:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF
mradermacher
2024-05-11T01:36:13Z
93
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "base_model:itayl/Hebrew-Mistral-7B-200K-HeQ-Chat", "base_model:quantized:itayl/Hebrew-Mistral-7B-200K-HeQ-Chat", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-10T23:56:37Z
--- base_model: itayl/Hebrew-Mistral-7B-200K-HeQ-Chat language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/itayl/Hebrew-Mistral-7B-200K-HeQ-Chat <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q4_K_M.gguf) | Q4_K_M | 4.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q6_K.gguf) | Q6_K | 6.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-200K-HeQ-Chat-GGUF/resolve/main/Hebrew-Mistral-7B-200K-HeQ-Chat.f16.gguf) | f16 | 15.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
AnhDuc2507/model_weight_1
AnhDuc2507
2024-05-11T01:29:14Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_11_0", "base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h", "base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-10T11:04:36Z
--- license: cc-by-nc-4.0 base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h tags: - generated_from_trainer datasets: - common_voice_11_0 metrics: - wer model-index: - name: model_weight_1 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_11_0 type: common_voice_11_0 config: vi split: None args: vi metrics: - name: Wer type: wer value: 0.1265011243481173 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_weight_1 This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.1739 - Wer: 0.1265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:-----:|:---------------:|:------:| | 14.4773 | 1.3928 | 500 | 5.1397 | 1.0002 | | 4.442 | 2.7855 | 1000 | 5.1727 | 1.0 | | 3.9171 | 4.1783 | 1500 | 3.4650 | 0.9913 | | 3.2597 | 5.5710 | 2000 | 2.1658 | 0.8943 | | 2.5676 | 6.9638 | 2500 | 1.4240 | 0.7346 | | 2.0229 | 8.3565 | 3000 | 0.9604 | 0.5685 | | 1.6744 | 9.7493 | 3500 | 0.9651 | 0.4661 | | 1.4788 | 11.1421 | 4000 | 0.7943 | 0.4500 | | 1.3045 | 12.5348 | 4500 | 0.6500 | 0.3282 | | 1.3199 | 13.9276 | 5000 | 0.4307 | 0.3130 | | 1.1017 | 15.3203 | 5500 | 0.7321 | 0.2742 | | 1.0042 | 16.7131 | 6000 | 0.9041 | 0.2408 | | 1.0219 | 18.1058 | 6500 | 0.6662 | 0.2374 | | 0.9303 | 19.4986 | 7000 | 0.7430 | 0.2171 | | 0.8425 | 20.8914 | 7500 | 1.5198 | 0.1954 | | 0.8409 | 22.2841 | 8000 | 0.6491 | 0.1982 | | 0.881 | 23.6769 | 8500 | 0.6060 | 0.1734 | | 0.8061 | 25.0696 | 9000 | 0.4495 | 0.1607 | | 0.7404 | 26.4624 | 9500 | 0.6027 | 0.1630 | | 0.713 | 27.8552 | 10000 | 0.5014 | 0.1542 | | 0.7678 | 29.2479 | 10500 | 0.2076 | 0.1491 | | 0.7059 | 30.6407 | 11000 | 0.2030 | 0.1497 | | 0.6873 | 32.0334 | 11500 | 0.5304 | 0.1390 | | 0.6471 | 33.4262 | 12000 | 0.4658 | 0.1378 | | 0.6007 | 34.8189 | 12500 | 0.1836 | 0.1365 | | 0.6758 | 36.2117 | 13000 | 0.1798 | 0.1314 | | 0.6231 | 37.6045 | 13500 | 0.1793 | 0.1312 | | 0.6034 | 38.9972 | 14000 | 0.1739 | 0.1265 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
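A minimal inference sketch — it assumes the processor files were pushed with the weights, and the audio path is a placeholder for a 16 kHz mono recording matching the training data:

```python
# Minimal CTC inference sketch for this checkpoint.
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "AnhDuc2507/model_weight_1"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

speech, sr = sf.read("sample.wav")  # placeholder; expected 16 kHz mono
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```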
casque/FullNelson-10
casque
2024-05-11T01:26:53Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-05-11T01:26:32Z
--- license: creativeml-openrail-m ---
casque/Doggystyle-10
casque
2024-05-11T01:26:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-05-11T01:25:48Z
--- license: creativeml-openrail-m ---
yuweiiizz/whisper-small-taiwanese
yuweiiizz
2024-05-11T01:18:52Z
46
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "nan", "dataset:mozilla-foundation/common_voice_16_1", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-25T17:41:00Z
--- language: - nan license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_16_1 model-index: - name: Whisper Small Taiwanese results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Taiwanese This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set: - Loss: 0.9011 - Cer: 50.3995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.6 ### Training results | Training Loss | Epoch | Step | Cer | Validation Loss | |:-------------:|:-----:|:----:|:-------:|:---------------:| | 1.1421 | 0.4 | 1000 | 61.1639 | 1.1692 | | 1.0556 | 0.8 | 2000 | 51.7749 | 1.0215 | | 0.7837 | 1.2 | 3000 | 54.1978 | 0.9572 | | 0.7332 | 1.6 | 4000 | 50.3966 | 0.9230 | | 0.6957 | 2.0 | 5000 | 50.5772 | 0.9064 | | 0.6211 | 2.4 | 6000 | 49.8590 | 0.9177 | | 0.5584 | 2.8 | 7000 | 47.5366 | 0.8962 | | 0.3952 | 3.2 | 8000 | 48.2925 | 0.9025 | | 0.4248 | 3.6 | 9000 | 50.3995 | 0.9011 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.1.2 - Datasets 2.19.1 - Tokenizers 0.19.1
BEE-spoke-data/mega-ar-350m-L3t-v0.08-ultraTBfw
BEE-spoke-data
2024-05-11T01:01:10Z
12
1
transformers
[ "transformers", "safetensors", "mega", "text-generation", "en", "dataset:BEE-spoke-data/UltraTextbooks-2.1-fw_mix", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-07T17:27:18Z
--- license: apache-2.0 metrics: - accuracy inference: parameters: max_new_tokens: 64 do_sample: true temperature: 0.7 repetition_penalty: 1.10 no_repeat_ngram_size: 6 eta_cutoff: 0.0008 renormalize_logits: true widget: - text: My name is El Microondas the Wise, and example_title: El Microondas - text: Kennesaw State University is a public example_title: Kennesaw State University - text: >- Bungie Studios is an American video game developer. They are most famous for developing the award winning Halo series of video games. They also made Destiny. The studio was founded example_title: Bungie - text: The Mona Lisa is a world-renowned painting created by example_title: Mona Lisa - text: >- The Harry Potter series, written by J.K. Rowling, begins with the book titled example_title: Harry Potter Series - text: >- Question: I have cities, but no houses. I have mountains, but no trees. I have water, but no fish. What am I? Answer: example_title: Riddle - text: The process of photosynthesis involves the conversion of example_title: Photosynthesis - text: >- Jane went to the store to buy some groceries. She picked up apples, oranges, and a loaf of bread. When she got home, she realized she forgot example_title: Story Continuation - text: >- Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph, and another train leaves Station B at 10:00 AM and travels at 80 mph, when will they meet if the distance between the stations is 300 miles? To determine example_title: Math Problem - text: In the context of computer programming, an algorithm is example_title: Algorithm Definition pipeline_tag: text-generation datasets: - BEE-spoke-data/UltraTextbooks-2.1-fw_mix language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mega-ar-350m-L3t-v0.08-ultraTBfw ## Model description This is a pretraining experiment most recently trained on the BEE-spoke-data/UltraTextbooks-2.1-fw_mix dataset. 
It achieves the following results on the evaluation set: - Loss: 2.0787 - Accuracy: 0.5746 - Num Input Tokens Seen: 3492282368 ## Quick eval Quick eval for: pszemraj/mega-ar-350m-L3t-v0.08-ultraTBfw hf (pretrained=pszemraj/mega-ar-350m-L3t-v0.08-ultraTBfw,trust_remote_code=True,dtype=float), gen_kwargs: (None), limit: 0.99999, num_fewshot: None, batch_size: 8 | Tasks |Version|Filter|n-shot| Metric | Value | |Stderr| |--------------|------:|------|-----:|----------|------:|---|-----:| |arc_easy | 1|none | 0|acc | 0.4246|± |0.0139| | | |none | 0|acc_norm | 0.4002|± |0.0138| |boolq | 2|none | 0|acc | 0.5762|± |0.0139| |lambada_openai| 1|none | 0|perplexity|76.7162|± |6.3531| | | |none | 0|acc | 0.2605|± |0.0123| |openbookqa | 1|none | 0|acc | 0.1840|± |0.0173| | | |none | 0|acc_norm | 0.2720|± |0.0199| |piqa | 1|none | 0|acc | 0.6377|± |0.0135| | | |none | 0|acc_norm | 0.6172|± |0.0137| |winogrande | 1|none | 0|acc | 0.5020|± |0.0141| ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 80085 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:| | 2.2572 | 0.0600 | 400 | 2.2462 | 0.5491 | 209715200 | | 2.2173 | 0.1201 | 800 | 2.1939 | 0.5564 | 419430400 | | 2.1992 | 0.1801 | 1200 | 2.1689 | 0.5604 | 629145600 | | 2.1543 | 0.2402 | 1600 | 2.1521 | 0.5632 | 838860800 | | 2.1532 | 0.3002 | 2000 | 2.1401 | 0.5650 | 1048576000 | | 2.1688 | 0.3603 | 2400 | 2.1307 | 0.5663 | 1258291200 | | 2.1443 | 0.4203 | 2800 | 2.1227 | 0.5676 | 1468006400 | | 2.1105 | 0.4804 | 3200 | 2.1158 | 0.5689 | 1677721600 | | 2.1045 | 0.5404 | 3600 | 2.1090 | 0.5700 | 1887436800 | | 2.1181 | 0.6004 | 4000 | 2.1045 | 0.5708 | 2097152000 | | 2.127 | 0.6605 | 4400 | 2.0994 | 0.5716 | 2306867200 | | 2.1265 | 0.7205 | 4800 | 2.0958 | 0.5719 | 2516582400 | | 2.0951 | 0.7806 | 5200 | 2.0909 | 0.5728 | 2726297600 | | 2.0951 | 0.8406 | 5600 | 2.0876 | 0.5733 | 2936012800 | | 2.1335 | 0.9007 | 6000 | 2.0838 | 0.5739 | 3145728000 | | 2.0731 | 0.9607 | 6400 | 2.0802 | 0.5744 | 3355443200 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
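A generation sketch mirroring the widget parameters declared in the card header above (`trust_remote_code=True` matches the lm-eval invocation):

```python
# Generation sketch using the inference parameters from the card header.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BEE-spoke-data/mega-ar-350m-L3t-v0.08-ultraTBfw"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("The process of photosynthesis involves", return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.10,
    no_repeat_ngram_size=6,
    eta_cutoff=0.0008,
    renormalize_logits=True,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```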
eitan3/infini_llama_3_v1
eitan3
2024-05-11T00:59:50Z
11
0
transformers
[ "transformers", "safetensors", "infini_llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T23:56:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fine-tuned/jina-embeddings-v2-base-en-5102024-h7o7-webapp
fine-tuned
2024-05-11T00:57:31Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Matchmaking", "Relationships", "Love", "Dating", "Marriage", "custom_code", "en", "dataset:fine-tuned/jina-embeddings-v2-base-en-5102024-h7o7-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-11T00:57:16Z
--- license: apache-2.0 datasets: - fine-tuned/jina-embeddings-v2-base-en-5102024-h7o7-webapp - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Matchmaking - Relationships - Love - Dating - Marriage --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: professional matchmaking services ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jina-embeddings-v2-base-en-5102024-h7o7-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
EthanRhys/Spitz-Current
EthanRhys
2024-05-11T00:53:51Z
0
0
null
[ "license:openrail++", "region:us" ]
null
2024-05-11T00:50:55Z
--- license: openrail++ ---
nbalepur/LLama-2-70b-Mnemonic-Tokenizer
nbalepur
2024-05-11T00:51:13Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T00:51:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Purusharth/gorilla-espro
Purusharth
2024-05-11T00:48:55Z
5
0
peft
[ "peft", "tensorboard", "safetensors", "llama", "trl", "sft", "generated_from_trainer", "base_model:gorilla-llm/gorilla-openfunctions-v2", "base_model:adapter:gorilla-llm/gorilla-openfunctions-v2", "license:apache-2.0", "region:us" ]
null
2024-05-08T06:11:48Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: gorilla-llm/gorilla-openfunctions-v2 model-index: - name: gorilla-espro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gorilla-espro This model is a fine-tuned version of [gorilla-llm/gorilla-openfunctions-v2](https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 200 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
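The card's usage section is empty. Below is a minimal, hedged sketch of how a PEFT adapter like this one is typically loaded for inference; the prompt is a placeholder, and it assumes the adapter repo carries a valid `adapter_config.json` pointing at the base model:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the adapter together with its base model (gorilla-llm/gorilla-openfunctions-v2).
model = AutoPeftModelForCausalLM.from_pretrained(
    "Purusharth/gorilla-espro",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("gorilla-llm/gorilla-openfunctions-v2")

# Placeholder prompt; the card does not document the expected prompt format.
inputs = tokenizer("Call a function to get the current weather in Paris.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```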
Holarissun/RM-TLDR_human_loraR64_-1_gemma7b_lr1.41e-05_bs2_g4
Holarissun
2024-05-11T00:43:16Z
0
0
peft
[ "peft", "safetensors", "trl", "reward-trainer", "generated_from_trainer", "base_model:google/gemma-7b", "base_model:adapter:google/gemma-7b", "license:gemma", "region:us" ]
null
2024-05-11T00:43:11Z
--- license: gemma library_name: peft tags: - trl - reward-trainer - generated_from_trainer base_model: google/gemma-7b metrics: - accuracy model-index: - name: RM-TLDR_human_loraR64_-1_gemma7b_lr1.41e-05_bs2_g4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RM-TLDR_human_loraR64_-1_gemma7b_lr1.41e-05_bs2_g4 This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5504 - Accuracy: 0.7435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4935 | 1.0 | 11168 | 0.5452 | 0.7435 | | 0.4459 | 2.0 | 22336 | 0.5504 | 0.7435 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
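The card gives no usage snippet. A rough sketch of scoring text with a TRL-style reward model follows, assuming the adapter was trained with `RewardTrainer` on a single-logit sequence-classification head and that the head weights were saved alongside the LoRA weights; both are assumptions, not confirmed by the card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# TRL's RewardTrainer uses a sequence-classification head with one logit,
# so the base model is configured the same way before attaching the adapter.
base = AutoModelForSequenceClassification.from_pretrained(
    "google/gemma-7b", num_labels=1, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "Holarissun/RM-TLDR_human_loraR64_-1_gemma7b_lr1.41e-05_bs2_g4")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

inputs = tokenizer("POST: ...\n\nTL;DR: a candidate summary", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()  # higher score = more preferred summary
print(reward)
```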
acrobatlm/e2erag-arctic-m
acrobatlm
2024-05-11T00:39:36Z
84
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
feature-extraction
2024-05-10T00:06:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
realtreetune/rho-interpreter-1b-sft-MATH
realtreetune
2024-05-11T00:34:13Z
140
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T00:32:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
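The "How to Get Started" section above is empty; here is a minimal sketch assuming standard `transformers` causal-LM usage. The prompt format for this MATH SFT checkpoint is a guess:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "realtreetune/rho-interpreter-1b-sft-MATH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The exact prompt template is undocumented; a plain question/answer format is used here.
prompt = "Question: What is 12 * 7?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```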
ikura31/mistral_docs_sum_p1_full
ikura31
2024-05-11T00:32:16Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-08T16:01:06Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.1 model-index: - name: mistral_docs_sum_p1_full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_docs_sum_p1_full This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5829 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1167 | 0.0277 | 200 | 2.1333 | | 2.3428 | 0.0553 | 400 | 1.6966 | | 1.3784 | 0.0830 | 600 | 1.4972 | | 1.456 | 0.1107 | 800 | 1.3942 | | 1.3227 | 0.1383 | 1000 | 1.3084 | | 1.2535 | 0.1660 | 1200 | 1.2001 | | 1.0612 | 0.1937 | 1400 | 1.0451 | | 0.8815 | 0.2213 | 1600 | 0.9632 | | 0.8971 | 0.2490 | 1800 | 0.9132 | | 0.7908 | 0.2767 | 2000 | 0.8712 | | 0.7549 | 0.3043 | 2200 | 0.8309 | | 0.8099 | 0.3320 | 2400 | 0.8058 | | 0.6891 | 0.3597 | 2600 | 0.7879 | | 0.5204 | 0.3873 | 2800 | 0.7684 | | 0.6249 | 0.4150 | 3000 | 0.7515 | | 0.6764 | 0.4427 | 3200 | 0.7342 | | 0.6996 | 0.4703 | 3400 | 0.7214 | | 0.6371 | 0.4980 | 3600 | 0.7084 | | 0.6694 | 0.5257 | 3800 | 0.6951 | | 0.7048 | 0.5533 | 4000 | 0.6845 | | 0.7265 | 0.5810 | 4200 | 0.6778 | | 0.5663 | 0.6087 | 4400 | 0.6657 | | 0.6222 | 0.6363 | 4600 | 0.6595 | | 0.6463 | 0.6640 | 4800 | 0.6488 | | 0.5754 | 0.6917 | 5000 | 0.6410 | | 0.6208 | 0.7193 | 5200 | 0.6363 | | 0.5613 | 0.7470 | 5400 | 0.6275 | | 0.6316 | 0.7747 | 5600 | 0.6227 | | 0.6564 | 0.8023 | 5800 | 0.6159 | | 0.633 | 0.8300 | 6000 | 0.6077 | | 0.5268 | 0.8577 | 6200 | 0.6022 | | 0.4166 | 0.8853 | 6400 | 0.5978 | | 0.6539 | 0.9130 | 6600 | 0.5926 | | 0.5695 | 0.9407 | 6800 | 0.5875 | | 0.6358 | 0.9683 | 7000 | 0.5845 | | 0.5318 | 0.9960 | 7200 | 0.5829 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
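No usage example is given in the card. A minimal sketch follows, assuming the checkpoint keeps the Mistral-Instruct chat template and responds to a plain summarization instruction (both are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ikura31/mistral_docs_sum_p1_full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-Instruct checkpoints ship a chat template; the instruction wording is illustrative.
messages = [{"role": "user", "content": "Summarize the following document:\n\n<document text>"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```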
Holarissun/dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06
Holarissun
2024-05-11T00:27:26Z
2
0
peft
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "license:mit", "region:us" ]
null
2024-05-11T00:27:24Z
--- license: mit library_name: peft tags: - trl - dpo - generated_from_trainer base_model: gpt2 model-index: - name: dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 15 - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
Holarissun/dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05
Holarissun
2024-05-11T00:22:59Z
1
0
peft
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "license:mit", "region:us" ]
null
2024-05-11T00:22:57Z
--- license: mit library_name: peft tags: - trl - dpo - generated_from_trainer base_model: gpt2 model-index: - name: dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 15 - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
overloud/celinho
overloud
2024-05-11T00:21:33Z
0
0
null
[ "license:cc-by-nc-2.0", "region:us" ]
null
2024-05-11T00:20:12Z
--- license: cc-by-nc-2.0 ---
HarshilPatel1905/flant5_peft_model_emotion_detection
HarshilPatel1905
2024-05-11T00:17:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T00:17:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf
RichardErkhov
2024-05-11T00:17:18Z
37
0
null
[ "gguf", "arxiv:2310.04799", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-10T21:41:58Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-Open-Ko-8B - GGUF - Model creator: https://huggingface.co/beomi/ - Original model: https://huggingface.co/beomi/Llama-3-Open-Ko-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-Open-Ko-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3-Open-Ko-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3-Open-Ko-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama-3-Open-Ko-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3-Open-Ko-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3-Open-Ko-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3-Open-Ko-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3-Open-Ko-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3-Open-Ko-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3-Open-Ko-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3-Open-Ko-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3-Open-Ko-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3-Open-Ko-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [Llama-3-Open-Ko-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3-Open-Ko-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [Llama-3-Open-Ko-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [Llama-3-Open-Ko-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3-Open-Ko-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3-Open-Ko-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | 
[Llama-3-Open-Ko-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3-Open-Ko-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q6_K.gguf) | Q6_K | 6.14GB | Original model description: --- language: - en - ko license: other tags: - facebook - meta - pytorch - llama - llama-3 - llama-3-ko pipeline_tag: text-generation license_name: llama3 license_link: LICENSE --- > Update @ 2024.05.01: Pre-Release [Llama-3-KoEn-8B](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) model & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview) > Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) ## Model Details **Llama-3-Open-Ko-8B** Llama-3-Open-Ko-8B is a continued-pretraining language model based on Llama-3-8B. This model was trained entirely on publicly available resources, with 60GB+ of deduplicated text. With the new Llama-3 tokenizer, pretraining was conducted on 17.7B+ tokens, slightly more than with the Korean tokenizer (the Llama-2-Ko tokenizer). Training was done on a TPUv5e-256, with warm support from Google's TRC program. **Note for [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)** Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview). It is NOT fine-tuned on any Korean instruction set (hence `preview`), but it should be a great starting point for creating new Chat/Instruct models. **Meta Llama-3** Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Junbum Lee (Beomi) **Variations** Llama-3-Open-Ko comes in one size: 8B. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama-3-Open-Ko </td> <td rowspan="2" >Same as *Open-Solar-Ko Dataset </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >17.7B+ </td> <td>Jun, 2023 </td> </tr> </table> *You can find the dataset list here: https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B/tree/main/corpus **Model Release Date** 2024.04.24. **Status** This is a static model trained on an offline dataset. **License** Llama3 License: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English.
Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use TBD ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use.
Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions **Llama-3-Open-Ko** ``` @article{llama3openko, title={Llama-3-Open-Ko}, author={L, Junbum}, year={2024}, url={https://huggingface.co/beomi/Llama-3-Open-Ko-8B} } ``` **Original Llama-3** ``` @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ```
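The chat-vector note above can be made concrete. Below is a minimal sketch, not from the original card, of the weight arithmetic described in the Chat Vector paper (instruct minus base, added to the continued-pretrained model). It assumes all three checkpoints share the same architecture and tokenizer; loading three 8B checkpoints at once requires substantial memory, and the Meta repo ids require accepting the Llama 3 license:

```python
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
ko = AutoModelForCausalLM.from_pretrained("beomi/Llama-3-Open-Ko-8B", torch_dtype=torch.bfloat16)

# chat vector = instruct weights - base weights; adding it to the continued-pretrained
# model transfers instruction-following behaviour without Korean instruction data.
base_sd, inst_sd = base.state_dict(), inst.state_dict()
with torch.no_grad():
    for name, param in ko.named_parameters():
        param.add_(inst_sd[name] - base_sd[name])

ko.save_pretrained("Llama-3-Open-Ko-8B-Instruct-chatvector")
```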
rvukasin/dqn-SpaceInvadersNoFrameskip-v4
rvukasin
2024-05-11T00:16:47Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-11T00:16:11Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 498.50 +/- 244.72 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rvukasin -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rvukasin -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rvukasin ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Acopa/sdxl_controlnet_deep_fash_test
Acopa
2024-05-11T00:04:22Z
1
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-10T22:26:32Z
--- license: openrail++ library_name: diffusers tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - controlnet - diffusers-training base_model: stabilityai/stable-diffusion-xl-base-1.0 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-Acopa/sdxl_controlnet These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
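The TODO snippet above can be filled in only approximately, since the card does not say what the conditioning signal is. A sketch assuming a standard SDXL ControlNet setup, with a placeholder conditioning image URL:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# The conditioning type is unspecified in the card; a generic conditioning image is assumed.
controlnet = ControlNetModel.from_pretrained(
    "Acopa/sdxl_controlnet_deep_fash_test", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("https://example.com/conditioning.png")  # placeholder URL
image = pipe("a fashion photo", image=cond, num_inference_steps=30).images[0]
image.save("out.png")
```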
ayaaa123/my_awesome_clm_model
ayaaa123
2024-05-10T23:58:13Z
62
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T23:57:10Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_keras_callback model-index: - name: ayaaa123/my_awesome_clm_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ayaaa123/my_awesome_clm_model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.6000 - Validation Loss: 4.0725 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.6000 | 4.0725 | 0 | ### Framework versions - Transformers 4.40.2 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
fine-tuned/jina-embeddings-v2-base-code-11_05_2024-hbxc-webapp
fine-tuned
2024-05-10T23:55:24Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "RAG", "Retrieval", "Augmented", "Generation", "Knowledge", "custom_code", "en", "dataset:fine-tuned/jina-embeddings-v2-base-code-11_05_2024-hbxc-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-10T23:55:07Z
--- license: apache-2.0 datasets: - fine-tuned/jina-embeddings-v2-base-code-11_05_2024-hbxc-webapp - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - RAG - Retrieval - Augmented - Generation - Knowledge --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-code**](https://huggingface.co/jinaai/jina-embeddings-v2-base-code) designed for the following use case: AI framework for improving LLM responses ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jina-embeddings-v2-base-code-11_05_2024-hbxc-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
Acopa/sdxl_turbo_lora_test
Acopa
2024-05-10T23:52:26Z
3
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:stabilityai/sdxl-turbo", "base_model:adapter:stabilityai/sdxl-turbo", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-03T15:06:21Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - diffusers-training - lora base_model: stabilityai/sdxl-turbo inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - Acopa/sdxl_turbo_lora_test These are LoRA adaptation weights for stabilityai/sdxl-turbo. The weights were fine-tuned on the lirus18/deepfashion_with_captions dataset. You can find some example images below. LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
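As a stand-in for the TODO above, here is a minimal sketch of loading these LoRA weights onto the SDXL-Turbo base with `diffusers`; the prompt is illustrative, and single-step, zero-guidance sampling follows the usual SDXL-Turbo recipe:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")
# Load the fine-tuned LoRA weights on top of the turbo base model.
pipe.load_lora_weights("Acopa/sdxl_turbo_lora_test")

# SDXL-Turbo is typically run with one inference step and no classifier-free guidance.
image = pipe("a model wearing a red dress", num_inference_steps=1, guidance_scale=0.0).images[0]
image.save("out.png")
```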
Passion1987/corgy_car_LoRA
Passion1987
2024-05-10T23:51:44Z
1
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-10T19:14:35Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - dora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of car widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - Passion1987/corgy_car_LoRA <Gallery /> ## Model description These are Passion1987/corgy_car_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of car` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](Passion1987/corgy_car_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
JohnReese/CustomEpicMix
JohnReese
2024-05-10T23:46:30Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-19T20:43:02Z
--- license: creativeml-openrail-m ---
Litzy619/Phi0503HMA10
Litzy619
2024-05-10T23:43:41Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-05-10T19:43:56Z
--- license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - generated_from_trainer model-index: - name: Phi0503HMA10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Phi0503HMA10 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1439 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.5505 | 0.09 | 10 | 1.2821 | | 0.516 | 0.18 | 20 | 0.2954 | | 0.2842 | 0.27 | 30 | 0.3173 | | 0.3169 | 0.36 | 40 | 0.4883 | | 0.4946 | 0.45 | 50 | 0.2295 | | 0.6098 | 0.54 | 60 | 1.0244 | | 0.567 | 0.63 | 70 | 0.2381 | | 0.2346 | 0.73 | 80 | 0.2265 | | 0.2844 | 0.82 | 90 | 0.2214 | | 4.2973 | 0.91 | 100 | 2.3953 | | 1.7624 | 1.0 | 110 | 1.0186 | | 0.7104 | 1.09 | 120 | 0.4461 | | 0.3678 | 1.18 | 130 | 0.2999 | | 0.2858 | 1.27 | 140 | 0.2034 | | 0.224 | 1.36 | 150 | 0.1894 | | 0.2127 | 1.45 | 160 | 0.2045 | | 0.2229 | 1.54 | 170 | 0.1843 | | 0.1846 | 1.63 | 180 | 0.1824 | | 0.1745 | 1.72 | 190 | 0.1665 | | 0.1676 | 1.81 | 200 | 0.1567 | | 0.1583 | 1.9 | 210 | 0.1572 | | 0.1475 | 1.99 | 220 | 0.1532 | | 0.1529 | 2.08 | 230 | 0.1466 | | 0.1481 | 2.18 | 240 | 0.1453 | | 0.1474 | 2.27 | 250 | 0.1497 | | 0.1479 | 2.36 | 260 | 0.1471 | | 0.1404 | 2.45 | 270 | 0.1438 | | 0.1457 | 2.54 | 280 | 0.1444 | | 0.147 | 2.63 | 290 | 0.1451 | | 0.1451 | 2.72 | 300 | 0.1444 | | 0.1508 | 2.81 | 310 | 0.1442 | | 0.1447 | 2.9 | 320 | 0.1439 | | 0.1425 | 2.99 | 330 | 0.1439 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0