Dataset columns (types and observed min/max across rows):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-02 18:27:42 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (549 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-02 18:24:50 |
| card | string (length) | 11 | 1.01M |
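For reference, comparable metadata can be pulled straight from the Hub with `huggingface_hub`; the sketch below is an illustration only (the exact dataset behind this dump is not identified here, and attribute names on `ModelInfo` can differ slightly across `huggingface_hub` versions):

```python
from huggingface_hub import list_models

# List a handful of text-generation models, most-downloaded first.
# Fields roughly mirror the columns above (modelId -> id, createdAt -> created_at).
for m in list_models(filter="text-generation", sort="downloads", direction=-1, limit=5):
    print(m.id, m.downloads, m.likes, m.pipeline_tag, m.library_name)
```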

modelId: cyixiao/qwen-1.5B-arc-easy
author: cyixiao
last_modified: 2024-09-25T21:51:31Z
downloads: 77
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-09-25T21:47:04Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
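The auto-generated card above leaves its "How to Get Started" section empty. Based only on the record's tags (`transformers`, `qwen2`, `text-generation`, `conversational`), a minimal loading sketch might look like the following; the prompt and generation settings are illustrative assumptions, not documented by the author:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cyixiao/qwen-1.5B-arc-easy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Qwen2-style checkpoints ship a chat template, so format the prompt with it.
messages = [{"role": "user", "content": "Which gas do plants absorb during photosynthesis?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```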

modelId: cyixiao/qwen-1.5B-arc-challenge
author: cyixiao
last_modified: 2024-09-25T21:46:13Z
downloads: 77
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-09-25T21:40:31Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
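This card is the same empty template as the previous record; a shorter sketch with the `pipeline` API (the ARC-style question is an illustrative guess at the intended use):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="cyixiao/qwen-1.5B-arc-challenge")
prompt = "Question: Which planet in our solar system is known as the Red Planet?\nAnswer:"
print(generator(prompt, max_new_tokens=16)[0]["generated_text"])
```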

modelId: onnx-models/all-MiniLM-L6-v2-fine-tuned-epochs-8-onnx
author: onnx-models
last_modified: 2024-09-25T21:44:16Z
downloads: 6
likes: 0
library_name: light-embed
tags: [ "light-embed", "onnx", "bert", "sentence-transformers", "feature-extraction", "sentence-similarity", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: sentence-similarity
createdAt: 2024-09-25T21:43:33Z
card:
--- library_name: light-embed pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # onnx-models/all-MiniLM-L6-v2-fine-tuned-epochs-8-onnx This is the ONNX-ported version of the [event-nlp/all-MiniLM-L6-v2-fine-tuned-epochs-8](https://huggingface.co/event-nlp/all-MiniLM-L6-v2-fine-tuned-epochs-8) for generating text embeddings. ## Model details - Embedding dimension: 384 - Max sequence length: 256 - File size on disk: 0.08 GB - Modules incorporated in the onnx: Transformer, Pooling, Normalize <!--- Describe your model here --> ## Usage Using this model becomes easy when you have [light-embed](https://pypi.org/project/light-embed/) installed: ``` pip install -U light-embed ``` Then you can use the model by specifying the *original model name* like this: ```python from light_embed import TextEmbedding sentences = [ "This is an example sentence", "Each sentence is converted" ] model = TextEmbedding('event-nlp/all-MiniLM-L6-v2-fine-tuned-epochs-8') embeddings = model.encode(sentences) print(embeddings) ``` or by specifying the *onnx model name* like this: ```python from light_embed import TextEmbedding sentences = [ "This is an example sentence", "Each sentence is converted" ] model = TextEmbedding('onnx-models/all-MiniLM-L6-v2-fine-tuned-epochs-8-onnx') embeddings = model.encode(sentences) print(embeddings) ``` ## Citing & Authors Binh Nguyen / [email protected]

modelId: riiwang/chinese-roberta-wwm-ext_lr_5e-05_batch_8_epoch_3_model_span_selector
author: riiwang
last_modified: 2024-09-25T21:29:43Z
downloads: 106
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:hfl/chinese-roberta-wwm-ext", "base_model:finetune:hfl/chinese-roberta-wwm-ext", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: question-answering
createdAt: 2024-09-25T20:00:25Z
card:
--- library_name: transformers license: apache-2.0 base_model: hfl/chinese-roberta-wwm-ext tags: - generated_from_trainer model-index: - name: chinese-roberta-wwm-ext_lr_5e-05_batch_8_epoch_3_model_span_selector results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chinese-roberta-wwm-ext_lr_5e-05_batch_8_epoch_3_model_span_selector This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
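The trainer-generated card above gives hyperparameters but no usage snippet. Since the checkpoint is tagged `question-answering` on a BERT backbone, a minimal extractive-QA sketch might be (the question and context are made-up examples):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="riiwang/chinese-roberta-wwm-ext_lr_5e-05_batch_8_epoch_3_model_span_selector",
)
# Example: "Which country's capital is Beijing?" over a short Chinese context sentence.
result = qa(question="北京是哪个国家的首都?", context="北京是中华人民共和国的首都,也是其政治和文化中心。")
print(result["answer"], result["score"])
```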

modelId: legraphista/Llama-3.2-1B-Instruct-IMat-GGUF
author: legraphista
last_modified: 2024-09-25T21:28:49Z
downloads: 215
likes: 0
library_name: gguf
tags: [ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "quantized", "GGUF", "quantization", "imat", "imatrix", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "region:us", "conversational" ]
pipeline_tag: text-generation
createdAt: 2024-09-25T21:23:12Z
card:
--- base_model: meta-llama/Llama-3.2-1B-Instruct extra_gated_button_content: Submit extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n\u201CAgreement\u201D means the terms and\ \ conditions for use, reproduction, distribution and modification of the Llama\ \ Materials set forth herein.\n\n\u201CDocumentation\u201D means the specifications,\ \ manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n\u201CLicensee\u201D or \u201Cyou\u201D means you, or your employer or any other\ \ person or entity (if you are entering into this Agreement on such person or entity\u2019\ s behalf), of the age required under applicable laws, rules or regulations to provide\ \ legal consent and that has legal authority to bind your employer or such other\ \ person or entity if you are entering in this Agreement on their behalf.\n\n\u201C\ Llama 3.2\u201D means the foundational large language models and software and algorithms,\ \ including machine-learning model code, trained model weights, inference-enabling\ \ code, training-enabling code, fine-tuning enabling code and other elements of\ \ the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\ \n\u201CLlama Materials\u201D means, collectively, Meta\u2019s proprietary Llama\ \ 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\ \n\u201CMeta\u201D or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you\ \ are located in or, if you are an entity, your principal place of business is\ \ in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside\ \ of the EEA or Switzerland). \n\nBy clicking \u201CI Accept\u201D below or by using\ \ or distributing any portion or element of the Llama Materials, you agree to be\ \ bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of\ \ Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free\ \ limited license under Meta\u2019s intellectual property or other rights owned\ \ by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create\ \ derivative works of, and make modifications to the Llama Materials. \nb. Redistribution\ \ and Use. \ni. If you distribute or make available the Llama Materials (or any\ \ derivative works thereof), or a product or service (including another AI model)\ \ that contains any of them, you shall (A) provide a copy of this Agreement with\ \ any such Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the Llama Materials or any outputs or results of the Llama Materials\ \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\ \ or made available, you shall also include \u201CLlama\u201D at the beginning of\ \ any such AI model name.\nii. If you receive Llama Materials, or any derivative\ \ works thereof, from a Licensee as part of an integrated end user product, then\ \ Section 2 of this Agreement will not apply to you. \niii. 
You must retain in all\ \ copies of the Llama Materials that you distribute the following attribution notice\ \ within a \u201CNotice\u201D text file distributed as a part of such copies: \u201C\ Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright \xA9 Meta\ \ Platforms, Inc. All Rights Reserved.\u201D\niv. Your use of the Llama Materials\ \ must comply with applicable laws and regulations (including trade compliance laws\ \ and regulations) and adhere to the Acceptable Use Policy for the Llama Materials\ \ (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated\ \ by reference into this Agreement.\n \n2. Additional Commercial Terms. If, on\ \ the Llama 3.2 version release date, the monthly active users of the products or\ \ services made available by or for Licensee, or Licensee\u2019s affiliates, is\ \ greater than 700 million monthly active users in the preceding calendar month,\ \ you must request a license from Meta, which Meta may grant to you in its sole\ \ discretion, and you are not authorized to exercise any of the rights under this\ \ Agreement unless or until Meta otherwise expressly grants you such rights.\n3.\ \ Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS\ \ AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS,\ \ WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND,\ \ BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE,\ \ NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE\ \ SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\ \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\ \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER\ \ IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF\ \ THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ \u201CLlama\u201D (the \u201CMark\u201D) solely as required to comply with the\ \ last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines\ \ (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/).\ \ All goodwill arising out of your use of the Mark will inure to the benefit of\ \ Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives\ \ made by or for Meta, with respect to any derivative works and modifications of\ \ the Llama Materials that are made by you, as between you and Meta, you are and\ \ will be the owner of such derivative works and modifications.\nc. 
If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement of\ \ intellectual property or other rights owned or licensable by you, then any licenses\ \ granted to you under this Agreement shall terminate as of the date such litigation\ \ or claim is filed or instituted. You will indemnify and hold harmless Meta from\ \ and against any claim by any third party arising out of or related to your use\ \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\ \ this Agreement will commence upon your acceptance of this Agreement or access\ \ to the Llama Materials and will continue in full force and effect until terminated\ \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (\u201C**Policy**\u201D). The most recent copy of this policy can be found at\ \ [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. 
Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals\u2019 identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta\_\n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement\_\n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software \u201Cbug,\u201D\ \ or other problems that could lead to a violation of this Policy through one of\ \ the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" inference: false language: - en - de - fr - it - pt - hi - es - th library_name: gguf license: llama3.2 pipeline_tag: text-generation quantized_by: legraphista tags: - facebook - meta - pytorch - llama - llama-3 - quantized - GGUF - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # Llama-3.2-1B-Instruct-IMat-GGUF _Llama.cpp imatrix quantization of meta-llama/Llama-3.2-1B-Instruct_ Original Model: [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3825](https://github.com/ggerganov/llama.cpp/releases/tag/b3825) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: βœ… Available Link: [here](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Llama-3.2-1B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q8_0.gguf) | Q8_0 | 1.32GB | βœ… Available | βšͺ Static | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q6_K.gguf) | Q6_K | 1.02GB | βœ… Available | βšͺ Static | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q4_K.gguf) | Q4_K | 807.69MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q3_K.gguf) | Q3_K | 690.84MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | 
[Llama-3.2-1B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q2_K.gguf) | Q2_K | 580.87MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Llama-3.2-1B-Instruct.BF16.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.BF16.gguf) | BF16 | 2.48GB | βœ… Available | βšͺ Static | πŸ“¦ No | [Llama-3.2-1B-Instruct.FP16.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.FP16.gguf) | F16 | 2.48GB | βœ… Available | βšͺ Static | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q8_0.gguf) | Q8_0 | 1.32GB | βœ… Available | βšͺ Static | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q6_K.gguf) | Q6_K | 1.02GB | βœ… Available | βšͺ Static | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q5_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q5_K.gguf) | Q5_K | 911.50MB | βœ… Available | βšͺ Static | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q5_K_S.gguf) | Q5_K_S | 892.56MB | βœ… Available | βšͺ Static | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q4_K.gguf) | Q4_K | 807.69MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q4_K_S.gguf) | Q4_K_S | 775.65MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ4_NL.gguf) | IQ4_NL | 773.03MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ4_XS.gguf) | IQ4_XS | 743.14MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q3_K.gguf) | Q3_K | 690.84MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q3_K_L.gguf) | Q3_K_L | 732.52MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q3_K_S.gguf) | Q3_K_S | 641.69MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ3_M.gguf) | IQ3_M | 657.29MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ3_S.gguf) | IQ3_S | 643.92MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | 
[Llama-3.2-1B-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ3_XS.gguf) | IQ3_XS | 621.11MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 562.11MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q2_K.gguf) | Q2_K | 580.87MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q2_K_S.gguf) | Q2_K_S | 554.66MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ2_M.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ2_M.gguf) | IQ2_M | 515.45MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ2_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ2_S.gguf) | IQ2_S | 488.71MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ2_XS.gguf) | IQ2_XS | 475.87MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 447.03MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ1_M.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ1_M.gguf) | IQ1_M | 413.61MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No | [Llama-3.2-1B-Instruct.IQ1_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ1_S.gguf) | IQ1_S | 393.55MB | βœ… Available | 🟒 IMatrix | πŸ“¦ No ## Downloading using huggingface-cli If you do not have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/Llama-3.2-1B-Instruct-IMat-GGUF --include "Llama-3.2-1B-Instruct.Q8_0.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. 
In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/Llama-3.2-1B-Instruct-IMat-GGUF --include "Llama-3.2-1B-Instruct.Q8_0/*" --local-dir ./ # see FAQ for merging GGUF's ``` --- ## Inference ### Simple chat template ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> Cutting Knowledge Date: December 2023 Today Date: 26 Jul 2024 <|eot_id|><|start_header_id|>user<|end_header_id|> {user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|> {next_user_prompt}<|eot_id|> ``` ### Chat template with system prompt ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> Cutting Knowledge Date: December 2023 Today Date: 26 Jul 2024 {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|> {next_user_prompt}<|eot_id|> ``` ### Llama.cpp ``` llama.cpp/main -m Llama-3.2-1B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `Llama-3.2-1B-Instruct.Q8_0`) 3. Run `gguf-split --merge Llama-3.2-1B-Instruct.Q8_0/Llama-3.2-1B-Instruct.Q8_0-00001-of-XXXXX.gguf Llama-3.2-1B-Instruct.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
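Besides the `llama.cpp` CLI shown in the card, the same GGUF files can be driven from Python via the third-party `llama-cpp-python` bindings; this is not part of the original card and is only a sketch assuming the Q8_0 file has been downloaded locally:

```python
from llama_cpp import Llama

# Point model_path at one of the quantized files from the tables above.
llm = Llama(model_path="Llama-3.2-1B-Instruct.Q8_0.gguf", n_ctx=4096)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is an importance matrix (imatrix) used for?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```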

modelId: 1024m/OLMoE-1B-7B-0924-Instruct-Base
author: 1024m
last_modified: 2024-09-25T21:13:05Z
downloads: 12
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "olmoe", "text-generation", "moe", "olmo", "conversational", "en", "dataset:allenai/ultrafeedback_binarized_cleaned", "arxiv:2409.02060", "base_model:allenai/OLMoE-1B-7B-0924-SFT", "base_model:finetune:allenai/OLMoE-1B-7B-0924-SFT", "license:apache-2.0", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-09-25T21:05:24Z
card:
--- license: apache-2.0 language: - en tags: - moe - olmo - olmoe co2_eq_emissions: 1 datasets: - allenai/ultrafeedback_binarized_cleaned base_model: allenai/OLMoE-1B-7B-0924-SFT library_name: transformers --- <img alt="OLMoE Logo." src="olmoe-logo.png" width="250px"> # Model Summary > OLMoE-1B-7B-Instruct is a Mixture-of-Experts LLM with 1B active and 7B total parameters released in September 2024 (0924) that has been adapted via SFT and DPO from [OLMoE-1B-7B](https://hf.co/allenai/OLMoE-1B-7B-0924). It yields state-of-the-art performance among models with a similar cost (1B) and is competitive with much larger models like Llama2-13B-Chat. OLMoE is 100% open-source. This information and more can also be found on the [**OLMoE GitHub repository**](https://github.com/allenai/OLMoE). - **Paper**: https://arxiv.org/abs/2409.02060 - **Pretraining** [Checkpoints](https://hf.co/allenai/OLMoE-1B-7B-0924), [Code](https://github.com/allenai/OLMo/tree/Muennighoff/MoE), [Data](https://huggingface.co/datasets/allenai/OLMoE-mix-0924) and [Logs](https://wandb.ai/ai2-llm/olmoe/reports/OLMoE-1B-7B-0924--Vmlldzo4OTcyMjU3). - **SFT (Supervised Fine-Tuning)** [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-SFT), [Code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [Data](https://hf.co/datasets/allenai/tulu-v3.1-mix-preview-4096-OLMoE) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-sft-logs.txt). - **DPO/KTO (Direct Preference Optimization/Kahneman-Tversky Optimization)**, [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct), [Preference Data](https://hf.co/datasets/allenai/ultrafeedback_binarized_cleaned), [DPO code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [KTO code](https://github.com/Muennighoff/kto/blob/master/kto.py) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-dpo-logs.txt). # Use Install `transformers` **from source** until a release after [this PR](https://github.com/huggingface/transformers/pull/32406) & `torch` and run: ```python from transformers import OlmoeForCausalLM, AutoTokenizer import torch DEVICE = "cuda" if torch.cuda.is_available() else "cpu" # Load different ckpts via passing e.g. `revision=kto` model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924-Instruct").to(DEVICE) tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924-Instruct") messages = [{"role": "user", "content": "Explain to me like I'm five what is Bitcoin."}] inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(DEVICE) out = model.generate(inputs, max_length=100) print(tokenizer.decode(out[0])) """ <|endoftext|><|user|> Explain to me like I'm five what is Bitcoin. <|assistant|> Bitcoin is like a special kind of money that you can use to buy things online. But unlike regular money, like dollars or euros, Bitcoin isn't printed by governments or banks. Instead, it's created by a special computer program that helps people keep track of it. 
Here's how it works: imagine you have a bunch of toys, and you want to """ ``` Branches: - `main`: Preference tuned via DPO model of https://hf.co/allenai/OLMoE-1B-7B-0924-SFT (`main` branch) - `load-balancing`: Ablation with load balancing loss during DPO starting from the `load-balancing` branch of https://hf.co/allenai/OLMoE-1B-7B-0924-SFT - `non-annealed`: Ablation starting from the `non-annealed` branch of https://hf.co/allenai/OLMoE-1B-7B-0924-SFT which is an SFT of the pretraining checkpoint prior to annealing (branch `step1200000-tokens5033B` of https://hf.co/allenai/OLMoE-1B-7B-0924) - `kto`: Ablation using KTO instead of DPO. This branch is the checkpoint after 5,000 steps with the RMS optimizer. The other `kto*` branches correspond to the other checkpoints mentioned in the paper. # Evaluation Snapshot | Task (β†’) | MMLU | GSM8k | BBH | Human-Eval | Alpaca-Eval 1.0 | XSTest | IFEval | Avg | |---------------|------|-------|------|------------|-----------------|--------|--------|------| | **Setup (β†’)** | 0-shot | 8-shot CoT | 3-shot | 0-shot | 0-shot | 0-shot | 0-shot | | | **Metric (β†’)** | EM | EM | EM | Pass@10 | %win | F1 | Loose Acc | | | | | | | | | | | | | OLMo-1B (0724) | 25.0 | 7.0 | 22.5 | 16.0 | - | 67.6 | 20.5 | - | | +SFT | 36.0 | 12.5 | 27.2 | 21.2 | 41.5 | 81.9 | 26.1 | 35.9 | | +DPO | 36.7 | 12.5 | 30.6 | 22.0 | 50.9 | 79.8 | 24.2 | 37.4 | | OLMo-7B (0724) | 50.8 | 32.5 | 36.9 | 32.3 | - | 80.8 | 19.6 | - | | +SFT | 54.2 | 25.0 | 35.7 | 38.5 | 70.9 | 86.1 | 39.7 | 49.3 | | +DPO | 52.8 | 9.0 | 16.6 | 35.0 | 83.5 | **87.5** | 37.9 | 49.1 | | JetMoE-2B-9B | 45.6 | 43.0 | 37.2 | 54.6 | - | 68.2 | 20.0 | - | | +SFT | 46.1 | 53.5 | 35.6 | 64.8 | 69.3 | 55.6 | 30.5 | 50.4 | | DeepSeek-3B-16B | 37.7 | 18.5 | 39.4 | 48.3 | - | 65.9 | 13.5 | - | | +Chat | 48.5 | 46.5 | **40.8** | **70.1** | 74.8 | 85.6 | 32.3 | 57.0 | | Qwen1.5-3B-14B | **60.4** | 13.5 | 27.2 | 60.2 | - | 73.4 | 20.9 | - | | +Chat | 58.9 | **55.5** | 21.3 | 59.7 | 83.9 | 85.6 | 36.2 | 57.3 | | **OLMoE (This Model)** | 49.8 | 3.0 | 33.6 | 22.4 | - | 59.7 | 16.6 | - | | **+SFT** | 51.4 | 40.5 | 38.0 | 51.6 | 69.2 | 84.1 | 43.3 | 54.0 | | **+DPO** | 51.9 | 45.5 | 37.0 | 54.8 | **84.0** | 82.6 | **48.1** | **57.7** | # Citation ```bibtex @misc{muennighoff2024olmoeopenmixtureofexpertslanguage, title={OLMoE: Open Mixture-of-Experts Language Models}, author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi}, year={2024}, eprint={2409.02060}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2409.02060}, } ```

modelId: shuyuej/Llama-3.2-1B-GPTQ
author: shuyuej
last_modified: 2024-09-25T21:12:25Z
downloads: 62
likes: 0
library_name: null
tags: [ "safetensors", "llama", "license:apache-2.0", "4-bit", "gptq", "region:us" ]
pipeline_tag: null
createdAt: 2024-09-25T21:08:28Z
card:
--- license: apache-2.0 --- # The Quantized LLaMA 3.2 1B Model Original Base Model: `meta-llama/Llama-3.2-1B`.<br> Link: [https://huggingface.co/meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) ## Quantization Configurations ``` "quantization_config": { "batch_size": 1, "bits": 4, "block_name_to_quantize": null, "cache_block_outputs": true, "damp_percent": 0.1, "dataset": null, "desc_act": false, "exllama_config": { "version": 1 }, "group_size": 128, "max_input_length": null, "model_seqlen": null, "module_name_preceding_first_block": null, "modules_in_block_to_quantize": null, "pad_token_id": null, "quant_method": "gptq", "sym": true, "tokenizer": null, "true_sequential": true, "use_cuda_fp16": false, "use_exllama": true }, ``` ## Source Codes Source Codes: [https://github.com/vkola-lab/medpodgpt/tree/main/quantization](https://github.com/vkola-lab/medpodgpt/tree/main/quantization).
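The card lists the GPTQ configuration but no loading code. With GPTQ kernels installed (e.g. `auto-gptq` or `gptqmodel` alongside `optimum`), such a repository can usually be loaded straight through `transformers`; the snippet below is a sketch under that assumption and has not been verified against this exact checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shuyuej/Llama-3.2-1B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization_config stored in the repo is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```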

modelId: Nigam/NQuestion
author: Nigam
last_modified: 2024-09-25T21:01:49Z
downloads: 127
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text2text-generation
createdAt: 2024-09-25T17:48:38Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
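Another empty template; the tags (`t5`, `text2text-generation`) suggest a seq2seq model, likely for question generation given the repository name, so a generic sketch could be (the prompt prefix is a guess, since the expected input format is not documented):

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Nigam/NQuestion")
# The exact input format this fine-tune expects is unknown; this prefix is illustrative only.
print(generator("generate question: The Eiffel Tower is located in Paris.", max_new_tokens=32))
```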

modelId: colesmcintosh/Llama-3.2-1B-Instruct-8bit
author: colesmcintosh
last_modified: 2024-09-25T21:00:37Z
downloads: 63
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-09-25T20:59:43Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
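No usage is given here either; the `8-bit` / `bitsandbytes` tags suggest the weights are stored pre-quantized, so loading should only require `bitsandbytes` to be installed (a sketch under that assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "colesmcintosh/Llama-3.2-1B-Instruct-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The bitsandbytes 8-bit quantization config saved with the weights is applied on load.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me one fun fact about octopuses."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```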
1024m/OLMoE-1B-7B-0924-Base
1024m
2024-09-25T20:58:11Z
5
0
transformers
[ "transformers", "safetensors", "olmoe", "text-generation", "moe", "olmo", "en", "dataset:allenai/OLMoE-mix-0924", "arxiv:2409.02060", "license:apache-2.0", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T20:49:24Z
--- license: apache-2.0 language: - en tags: - moe - olmo - olmoe co2_eq_emissions: 1 datasets: - allenai/OLMoE-mix-0924 library_name: transformers --- <img alt="OLMoE Logo." src="olmoe-logo.png" width="250px"> # Model Summary > OLMoE-1B-7B is a Mixture-of-Experts LLM with 1B active and 7B total parameters released in September 2024 (0924). It yields state-of-the-art performance among models with a similar cost (1B) and is competitive with much larger models like Llama2-13B. OLMoE is 100% open-source. This information and more can also be found on the [**OLMoE GitHub repository**](https://github.com/allenai/OLMoE). - **Paper**: https://arxiv.org/abs/2409.02060 - **Pretraining** [Checkpoints](https://hf.co/allenai/OLMoE-1B-7B-0924), [Code](https://github.com/allenai/OLMo/tree/Muennighoff/MoE), [Data](https://huggingface.co/datasets/allenai/OLMoE-mix-0924) and [Logs](https://wandb.ai/ai2-llm/olmoe/reports/OLMoE-1B-7B-0924--Vmlldzo4OTcyMjU3). - **SFT (Supervised Fine-Tuning)** [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-SFT), [Code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [Data](https://hf.co/datasets/allenai/tulu-v3.1-mix-preview-4096-OLMoE) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-sft-logs.txt). - **DPO/KTO (Direct Preference Optimization/Kahneman-Tversky Optimization)**, [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct), [Preference Data](https://hf.co/datasets/allenai/ultrafeedback_binarized_cleaned), [DPO code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [KTO code](https://github.com/Muennighoff/kto/blob/master/kto.py) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-dpo-logs.txt). # Use Install `transformers` **from source** (until a release that includes [this PR](https://github.com/huggingface/transformers/pull/32406) is available) and `torch`, then run: ```python from transformers import OlmoeForCausalLM, AutoTokenizer import torch DEVICE = "cuda" if torch.cuda.is_available() else "cpu" # Load different ckpts via passing e.g. `revision=step10000-tokens41B` model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924").to(DEVICE) tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924") inputs = tokenizer("Bitcoin is", return_tensors="pt") inputs = {k: v.to(DEVICE) for k, v in inputs.items()} out = model.generate(**inputs, max_length=64) print(tokenizer.decode(out[0])) # > # Bitcoin is a digital currency that is created and held electronically. No one controls it. Bitcoins aren’t printed, like dollars or euros – they’re produced by people and businesses running computers all around the world, using software that solves mathematical ``` You can list all revisions/branches by installing `huggingface-hub` & running: ```python from huggingface_hub import list_repo_refs out = list_repo_refs("allenai/OLMoE-1B-7B-0924") branches = [b.name for b in out.branches] ``` Important branches: - `step1200000-tokens5033B`: Pretraining checkpoint used for annealing. There are a few more checkpoints after this one but we did not use them. - `main`: Checkpoint annealed from `step1200000-tokens5033B` for an additional 100B tokens (23,842 steps). We use this checkpoint for our adaptation (https://huggingface.co/allenai/OLMoE-1B-7B-0924-SFT & https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct). - `fp32`: FP32 version of `main`. The model weights were stored in FP32 during training but we did not observe any performance drop from casting them to BF16 after training so we upload all weights in BF16. 
If you want the original FP32 checkpoint for `main` you can use this one. You will find that it yields slightly different results but should perform around the same on benchmarks. # Evaluation Snapshot | Model | Active Params | Open Data | MMLU | HellaSwag | ARC-Chall. | ARC-Easy | PIQA | WinoGrande | |-----------------------------|---------------|-----------|------|-----------|------------|----------|------|------------| | **LMs with ~1B active parameters** | | | | | | | | | | **OLMoE-1B-7B** | **1.3B** | **βœ…** | **54.1** | **80.0** | **62.1** | **84.2** | **79.8** | **70.2** | | DCLM-1B | 1.4B | βœ… | 48.5 | 75.1 | 57.6 | 79.5 | 76.6 | 68.1 | | TinyLlama-1B | 1.1B | βœ… | 33.6 | 60.8 | 38.1 | 69.5 | 71.7 | 60.1 | | OLMo-1B (0724) | 1.3B | βœ… | 32.1 | 67.5 | 36.4 | 53.5 | 74.0 | 62.9 | | Pythia-1B | 1.1B | βœ… | 31.1 | 48.0 | 31.4 | 63.4 | 68.9 | 52.7 | | **LMs with ~2-3B active parameters** | | | | | | | | | | Qwen1.5-3B-14B | 2.7B | ❌ | **62.4** | 80.0 | **77.4** | **91.6** | **81.0** | 72.3 | | Gemma2-3B | 2.6B | ❌ | 53.3 | 74.6 | 67.5 | 84.3 | 78.5 | 71.8 | | JetMoE-2B-9B | 2.2B | ❌ | 49.1 | **81.7** | 61.4 | 81.9 | 80.3 | 70.7 | | DeepSeek-3B-16B | 2.9B | ❌ | 45.5 | 80.4 | 53.4 | 82.7 | 80.1 | **73.2** | | StableLM-2B | 1.6B | ❌ | 40.4 | 70.3 | 50.6 | 75.3 | 75.6 | 65.8 | | OpenMoE-3B-9B | 2.9B | βœ… | 27.4 | 44.4 | 29.3 | 50.6 | 63.3 | 51.9 | | **LMs with ~7-9B active parameters** | | | | | | | | | | Gemma2-9B | 9.2B | ❌ | **70.6** | **87.3** | **89.5** | **95.5** | **86.1** | **78.8** | | Llama3.1-8B | 8.0B | ❌ | 66.9 | 81.6 | 79.5 | 91.7 | 81.1 | 76.6 | | DCLM-7B | 6.9B | βœ… | 64.4 | 82.3 | 79.8 | 92.3 | 80.1 | 77.3 | | Mistral-7B | 7.3B | ❌ | 64.0 | 83.0 | 78.6 | 90.8 | 82.8 | 77.9 | | OLMo-7B (0724) | 6.9B | βœ… | 54.9 | 80.5 | 68.0 | 85.7 | 79.3 | 73.2 | | Llama2-7B | 6.7B | ❌ | 46.2 | 78.9 | 54.2 | 84.0 | 77.5 | 71.7 | # Citation ```bibtex @misc{muennighoff2024olmoeopenmixtureofexpertslanguage, title={OLMoE: Open Mixture-of-Experts Language Models}, author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi}, year={2024}, eprint={2409.02060}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2409.02060}, } ```
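A minimal sketch of using the branch names above as `revision` values when loading an intermediate checkpoint; it assumes the `step1200000-tokens5033B` branch described under "Important branches" and the same `transformers`/`torch` setup as the usage snippet above.

```python
import torch
from transformers import OlmoeForCausalLM, AutoTokenizer

REPO = "allenai/OLMoE-1B-7B-0924"
BRANCH = "step1200000-tokens5033B"  # pre-annealing checkpoint named above

# Any branch returned by list_repo_refs() can be passed as `revision`.
model = OlmoeForCausalLM.from_pretrained(
    REPO,
    revision=BRANCH,
    torch_dtype=torch.bfloat16,  # published weights are BF16; use the `fp32` branch for full precision
)
tokenizer = AutoTokenizer.from_pretrained(REPO, revision=BRANCH)
```

The same pattern applies to the `fp32` branch if the full-precision version of `main` is needed.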
rntc/cmbtv2
rntc
2024-09-25T20:49:44Z
90
0
transformers
[ "transformers", "safetensors", "roberta", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-09-25T09:27:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
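Since the "How to Get Started" section above is still empty, here is a generic sketch for a RoBERTa feature-extraction checkpoint. The repo id comes from this row's metadata; the mean-pooling step is an assumption rather than a documented recommendation from the model authors.

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "rntc/cmbtv2"  # repo id from the metadata; intended usage is otherwise undocumented
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

sentences = ["This is an example sentence.", "Each sentence gets one embedding."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_size)

# Mean-pool over non-padding tokens (an assumption: the authors may prefer CLS pooling).
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, hidden_size)
```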
1024m/Llama-3.2-3B-Base
1024m
2024-09-25T20:48:01Z
67
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T20:39:27Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 β€œAgreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. β€œDocumentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. β€œLicensee” or β€œyou” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. β€œLlama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. β€œLlama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. β€œMeta” or β€œwe” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking β€œI Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include β€œLlama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by MetaΒ  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagementΒ  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software β€œbug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. 
**Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) pipe("The key to life is") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-3B --include "original/*" --local-dir Llama-3.2-3B ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. 
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General 
| MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. 
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. 
We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. 
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
Cuma/test
Cuma
2024-09-25T20:44:51Z
89
0
transformers
[ "transformers", "tensorboard", "safetensors", "lilt", "token-classification", "generated_from_trainer", "base_model:nielsr/lilt-xlm-roberta-base", "base_model:finetune:nielsr/lilt-xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-09-25T20:37:29Z
--- library_name: transformers license: mit base_model: nielsr/lilt-xlm-roberta-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6605 - Precision: 0.7460 - Recall: 0.7692 - F1: 0.7575 - Accuracy: 0.7526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.3333 | 100 | 1.0907 | 0.4971 | 0.6210 | 0.5522 | 0.5889 | | No log | 2.6667 | 200 | 0.7374 | 0.6135 | 0.6857 | 0.6476 | 0.7475 | | No log | 4.0 | 300 | 0.8119 | 0.6292 | 0.7193 | 0.6713 | 0.7490 | | No log | 5.3333 | 400 | 0.8152 | 0.6930 | 0.7555 | 0.7229 | 0.7616 | | 0.6197 | 6.6667 | 500 | 0.9915 | 0.6824 | 0.7682 | 0.7227 | 0.7458 | | 0.6197 | 8.0 | 600 | 1.0589 | 0.6952 | 0.7809 | 0.7356 | 0.7680 | | 0.6197 | 9.3333 | 700 | 1.1514 | 0.7072 | 0.7285 | 0.7177 | 0.7456 | | 0.6197 | 10.6667 | 800 | 1.1828 | 0.7190 | 0.7652 | 0.7414 | 0.7625 | | 0.6197 | 12.0 | 900 | 1.2011 | 0.7301 | 0.7606 | 0.7450 | 0.7679 | | 0.0998 | 13.3333 | 1000 | 1.2323 | 0.7347 | 0.7662 | 0.7501 | 0.7622 | | 0.0998 | 14.6667 | 1100 | 1.3060 | 0.7413 | 0.7881 | 0.7640 | 0.7688 | | 0.0998 | 16.0 | 1200 | 1.3649 | 0.7337 | 0.7636 | 0.7484 | 0.7647 | | 0.0998 | 17.3333 | 1300 | 1.3661 | 0.7319 | 0.7789 | 0.7547 | 0.7685 | | 0.0998 | 18.6667 | 1400 | 1.4831 | 0.7386 | 0.7672 | 0.7526 | 0.7635 | | 0.0226 | 20.0 | 1500 | 1.4216 | 0.7299 | 0.7682 | 0.7486 | 0.7654 | | 0.0226 | 21.3333 | 1600 | 1.5146 | 0.7295 | 0.7733 | 0.7507 | 0.7539 | | 0.0226 | 22.6667 | 1700 | 1.6595 | 0.7398 | 0.7748 | 0.7569 | 0.7476 | | 0.0226 | 24.0 | 1800 | 1.5785 | 0.7609 | 0.7702 | 0.7656 | 0.7677 | | 0.0226 | 25.3333 | 1900 | 1.5824 | 0.7544 | 0.7886 | 0.7711 | 0.7587 | | 0.0057 | 26.6667 | 2000 | 1.6605 | 0.7460 | 0.7692 | 0.7575 | 0.7526 | | 0.0057 | 28.0 | 2100 | 1.6459 | 0.7396 | 0.7697 | 0.7544 | 0.7520 | | 0.0057 | 29.3333 | 2200 | 1.6605 | 0.7467 | 0.7748 | 0.7605 | 0.7541 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cpu - Datasets 3.0.0 - Tokenizers 0.19.1
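The card above does not include an inference example. A minimal sketch, assuming the checkpoint keeps the LiLT input signature of its `nielsr/lilt-xlm-roberta-base` parent (token ids plus word bounding boxes normalized to a 0-1000 page scale); the words, boxes, and label names below are placeholders, since the training dataset is not disclosed.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

repo_id = "Cuma/test"  # from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

# Placeholder document: OCR words with bounding boxes normalized to 0-1000.
words = ["Invoice", "No.", "12345"]
boxes = [[50, 40, 180, 60], [190, 40, 230, 60], [240, 40, 320, 60]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to token-level boxes; special tokens get a zero box.
token_boxes = [[0, 0, 0, 0] if i is None else boxes[i] for i in encoding.word_ids(0)]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits
predicted = logits.argmax(-1).squeeze(0).tolist()
print([model.config.id2label[p] for p in predicted])
```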
duyntnet/Llama-3.2-3B-Instruct-imatrix-GGUF
duyntnet
2024-09-25T20:44:41Z
13
0
transformers
[ "transformers", "gguf", "imatrix", "Llama-3.2-3B-Instruct", "text-generation", "en", "license:other", "region:us", "conversational" ]
text-generation
2024-09-25T19:32:46Z
--- license: other language: - en pipeline_tag: text-generation inference: false tags: - transformers - gguf - imatrix - Llama-3.2-3B-Instruct --- Quantizations of https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct ### Inference Clients/UIs * [llama.cpp](https://github.com/ggerganov/llama.cpp) * [KoboldCPP](https://github.com/LostRuins/koboldcpp) * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [ollama](https://github.com/ollama/ollama) --- # From original readme The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. ## How to use This repository contains two versions of Llama-3.2-3B-Instruct, for use with `transformers` and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --include "original/*" --local-dir Llama-3.2-3B-Instruct ```
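For running one of the GGUF files directly from Python, a sketch using llama-cpp-python, the Python binding for the llama.cpp engine listed above; the quantization filename is hypothetical and should be replaced with a file that actually exists in this repository.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical filename: pick a quantization published in this repo.
llm = Llama(model_path="Llama-3.2-3B-Instruct-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```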
SicariusSicariiStuff/Llama-3.2-1B-Instruct_FP8
SicariusSicariiStuff
2024-09-25T20:40:06Z
6
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2024-09-25T20:38:59Z
--- license: apache-2.0 ---
cocktailpeanut/stable-diffusion-v1-5
cocktailpeanut
2024-09-25T20:38:57Z
504
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "arxiv:2207.12598", "arxiv:2112.10752", "arxiv:2103.00020", "arxiv:2205.11487", "arxiv:1910.09700", "license:creativeml-openrail-m", "autotrain_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-09-25T20:07:06Z
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image inference: false extra_gated_prompt: |- This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well. extra_gated_fields: I have read the License and agree with its terms: checkbox --- # Stable Diffusion v1-5 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [πŸ€—'s Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion). The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion). ### Diffusers ```py from diffusers import StableDiffusionPipeline import torch model_id = "runwayml/stable-diffusion-v1-5" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16") pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion) ### Original GitHub Repository 1. Download the weights - [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference - [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning 2. Follow instructions [here](https://github.com/runwayml/stable-diffusion). 
## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. 
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to β€œA red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. Currently six Stable Diffusion checkpoints are provided, which were trained as follows. 
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on β€œlaion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything. - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-1-to-v1-5.png) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. 
- **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
polyverse/Llama-3.2-3B-Q5_K_M-GGUF
polyverse
2024-09-25T20:37:56Z
9
1
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-3B", "base_model:quantized:meta-llama/Llama-3.2-3B", "license:llama3.2", "endpoints_compatible", "region:us", "imatrix" ]
text-generation
2024-09-25T20:37:41Z
--- base_model: meta-llama/Llama-3.2-3B language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. 
All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # kaetemi/Llama-3.2-3B-Q5_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B`](https://huggingface.co/meta-llama/Llama-3.2-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo kaetemi/Llama-3.2-3B-Q5_K_M-GGUF --hf-file llama-3.2-3b-q5_k_m-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo kaetemi/Llama-3.2-3B-Q5_K_M-GGUF --hf-file llama-3.2-3b-q5_k_m-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo kaetemi/Llama-3.2-3B-Q5_K_M-GGUF --hf-file llama-3.2-3b-q5_k_m-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo kaetemi/Llama-3.2-3B-Q5_K_M-GGUF --hf-file llama-3.2-3b-q5_k_m-imat.gguf -c 2048 ```
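For scripted use instead of the CLI, the same file can be loaded through the `llama-cpp-python` bindings. This is only a sketch: it assumes `pip install llama-cpp-python huggingface_hub`, and it reuses the repo and file names from the commands above; the sampling parameters are illustrative.

```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub on first use, then loads it for local inference
llm = Llama.from_pretrained(
    repo_id="kaetemi/Llama-3.2-3B-Q5_K_M-GGUF",
    filename="llama-3.2-3b-q5_k_m-imat.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```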
neopolita/llama-3.2-1b-instruct-gguf
neopolita
2024-09-25T20:27:39Z
30
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-25T20:18:50Z
---
{}
---

# GGUF quants for [**meta-llama/Llama-3.2-1B-Instruct**](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) using [llama.cpp](https://github.com/ggerganov/llama.cpp)

**Terms of Use**: Please check the [**original model**](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)

<picture>
  <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>

## Quants

* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage, and slower inference.
* `q5_1`: Even higher accuracy and resource usage, and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
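To pick one of these quants programmatically, you can list the repository's GGUF files and download the one that fits your memory/quality trade-off. A minimal sketch, assuming the files are named after their quant level (as is usual for GGUF repos) and that `huggingface_hub` is installed:

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "neopolita/llama-3.2-1b-instruct-gguf"

# Enumerate the GGUF files actually published in this repo
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)

# Example: grab a q4_k_m file if one is present
chosen = next(f for f in gguf_files if "q4_k_m" in f.lower())
path = hf_hub_download(repo_id=repo_id, filename=chosen)
print("Downloaded to", path)
```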
lm-kit/llama-3.2-1b-instruct.gguf
lm-kit
2024-09-25T20:17:29Z
240
1
null
[ "gguf", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-25T20:07:58Z
---
license: llama3.2
---

## Model Summary

This repository hosts quantized versions of the Llama-3.2-1B-Instruct model.

**Format:** GGUF

**Converter:** llama.cpp 70392f1f81470607ba3afef04aa56c9f65587664

**Quantizer:** LM-Kit.NET 2024.8.4

For more detailed information on the base model, please visit the following link: [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
AMead10/Llama-3.2-3B-Instruct-AWQ
AMead10
2024-09-25T20:17:19Z
3,530
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "autoquant", "awq", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
2024-09-25T20:07:09Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - autoquant - awq extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. 
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B-Instruct, for use with `transformers` and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --include "original/*" --local-dir Llama-3.2-3B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. 
**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. 
### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). 
#### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. 
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. 
We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
Numan139/cleaned_ds
Numan139
2024-09-25T20:11:07Z
94
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-cnn-12-6", "base_model:finetune:sshleifer/distilbart-cnn-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T20:04:03Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-cnn-12-6 tags: - generated_from_trainer metrics: - rouge model-index: - name: cleaned_ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cleaned_ds This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.2803 - Rouge1: 0.2705 - Rouge2: 0.0363 - Rougel: 0.1609 - Rougelsum: 0.1609 - Generated Length: 113.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 1 | 4.5060 | 0.2826 | 0.0384 | 0.1694 | 0.1694 | 95.0 | | No log | 2.0 | 2 | 4.3368 | 0.2832 | 0.0333 | 0.1701 | 0.1701 | 82.0 | | No log | 3.0 | 3 | 4.2803 | 0.2705 | 0.0363 | 0.1609 | 0.1609 | 113.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
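For reference, the hyperparameters listed above map onto the `Seq2SeqTrainer` API roughly as shown below. This is a minimal sketch rather than the exact training script used for this card: the tiny inline dataset and its column names are placeholders, and the ROUGE metric computation is omitted.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "sshleifer/distilbart-cnn-12-6"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Placeholder data; substitute your own cleaned summarization dataset.
raw = Dataset.from_dict({
    "text": ["A long article that should be summarized ..."],
    "summary": ["A short summary."],
})

def preprocess(batch):
    model_inputs = tokenizer(batch["text"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

# Hyperparameters as reported in this card (Adam betas/epsilon are the library defaults).
args = Seq2SeqTrainingArguments(
    output_dir="cleaned_ds",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision (requires a CUDA GPU)
    predict_with_generate=True,
    eval_strategy="epoch",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```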
lmstudio-community/Llama-3.2-1B-Instruct-GGUF
lmstudio-community
2024-09-25T20:09:46Z
31,458
35
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-25T18:35:25Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 quantized_by: bartowski extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. 
You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. 
If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. 
Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## πŸ’« Community Model> Llama 3.2 1B Instruct by Meta-Llama *πŸ‘Ύ [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [meta-llama](https://huggingface.co/meta-llama)<br> **Original model**: [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3821](https://github.com/ggerganov/llama.cpp/releases/tag/b3821)<br> ## Technical Details Llama 3.2 is optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. Officially supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai languages, but is trained on even more. 128K context length support ## Special thanks πŸ™ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. 
LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
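For completeness, here is a minimal sketch of loading one of these GGUF files outside LM Studio, using `llama-cpp-python`. The quantization filename is an assumption for illustration; check the repository's file list for the exact variant you want.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed filename -- pick an actual quantization from the repo's file list.
model_path = hf_hub_download(
    repo_id="lmstudio-community/Llama-3.2-1B-Instruct-GGUF",
    filename="Llama-3.2-1B-Instruct-Q4_K_M.gguf",
)

# n_ctx can in principle be raised toward the 128K limit, at the cost of memory.
llm = Llama(model_path=model_path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```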
LlamaFinetuneBase/Meta-Llama-3.2-1B
LlamaFinetuneBase
2024-09-25T20:00:25Z
108
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T18:43:37Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 β€œAgreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. β€œDocumentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. β€œLicensee” or β€œyou” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. β€œLlama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. β€œLlama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. β€œMeta” or β€œwe” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking β€œI Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include β€œLlama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by MetaΒ  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagementΒ  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software β€œbug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. 
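As a quick illustration of the GQA point above, the head configuration can be read directly from the released checkpoint's config (assuming the license on the gated repository has been accepted):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Llama-3.2-1B")

# With grouped-query attention, several query heads share each key/value head,
# so num_key_value_heads is smaller than num_attention_heads.
print("query heads:    ", config.num_attention_heads)
print("key/value heads:", config.num_key_value_heads)
```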
**Model Release Date:** Sept 25, 2024

**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).

**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks.

**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

## How to use

This repository contains two versions of Llama-3.2-1B, for use with transformers and with the original `llama` codebase.

### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-3.2-1B"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

pipe("The key to life is")
```

### Use with `llama`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Llama-3.2-1B --include "original/*" --local-dir Llama-3.2-1B
```

## Hardware and Software

**Training Factors:** We used custom training libraries, Meta's custom-built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.

**Training Energy Use:** Training utilized a cumulative total of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. 
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General 
| MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. 
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. 
We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.

**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.

### Community

**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. 
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
alpindale/Llama-3.2-3B
alpindale
2024-09-25T19:52:02Z
207
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T19:51:31Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 β€œAgreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. β€œDocumentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. β€œLicensee” or β€œyou” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. β€œLlama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. β€œLlama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. β€œMeta” or β€œwe” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking β€œI Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include β€œLlama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by MetaΒ  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagementΒ  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software β€œbug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. 
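As an aside on the Grouped-Query Attention (GQA) mentioned above, the following is a small, self-contained sketch of the idea. It is not the model's actual attention code; all head counts, shapes, and tensor values below are toy assumptions chosen only to show how a small set of key/value heads is shared across groups of query heads.

```python
import torch
import torch.nn.functional as F

# Illustrative grouped-query attention (GQA): many query heads share a
# smaller set of key/value heads. These sizes are toy assumptions, not
# Llama 3.2's actual configuration.
batch, seq_len, head_dim = 2, 16, 64
n_q_heads, n_kv_heads = 32, 8            # each KV head serves 32 // 8 = 4 query heads
group_size = n_q_heads // n_kv_heads

q = torch.randn(batch, n_q_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Expand K/V so every query head in a group attends to the same K/V head.
k = k.repeat_interleave(group_size, dim=1)   # -> (batch, n_q_heads, seq_len, head_dim)
v = v.repeat_interleave(group_size, dim=1)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 32, 16, 64])
```

Because only the smaller set of key/value heads has to be cached during decoding, this is the property the card refers to as improved inference scalability.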
**Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) pipe("The key to life is") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-3B --include "original/*" --local-dir Llama-3.2-3B ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. 
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General 
| MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. 
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. 
We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking into account market-specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
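As a supplement to the Training Data section above, which describes using token-level logits from the larger Llama 3.1 8B and 70B models as distillation targets for the 1B and 3B models, the snippet below sketches what such a token-level logit-distillation objective generically looks like. It is not Meta's training code; the temperature, loss weighting, and toy shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Token-level logit distillation: a soft KL term against the teacher's
    distribution blended with the usual next-token cross-entropy.
    Hyperparameters here are illustrative assumptions."""
    # Soft targets from the (frozen) teacher, softened by the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the hard labels.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                         labels.view(-1))
    return alpha * kd + (1.0 - alpha) * ce

# Toy shapes: (batch, seq_len, vocab_size)
student_logits = torch.randn(2, 8, 128, requires_grad=True)
teacher_logits = torch.randn(2, 8, 128)
labels = torch.randint(0, 128, (2, 8))
print(distillation_loss(student_logits, teacher_logits, labels))
```

In this sketch the random teacher logits simply stand in for the precomputed outputs of the larger Llama 3.1 models described above.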
hangpatrick92/TransparentBagClassifier
hangpatrick92
2024-09-25T19:47:44Z
177
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "vision", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-09-23T16:56:37Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - vision - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: TransparentBagClassifier results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8597560975609756 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TransparentBagClassifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3956 - Accuracy: 0.8598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 0.448 | 1.0 | 82 | 0.7304 | 0.5725 | | 0.5097 | 2.0 | 164 | 0.7652 | 0.4946 | | 0.452 | 3.0 | 246 | 0.7565 | 0.4841 | | 0.3885 | 4.0 | 328 | 0.7565 | 0.4812 | | 0.4743 | 5.0 | 410 | 0.7739 | 0.4626 | | 0.4749 | 4.0 | 464 | 0.4572 | 0.7988 | | 0.4319 | 5.0 | 580 | 0.3956 | 0.8598 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cpu - Datasets 3.0.0 - Tokenizers 0.19.1
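The usage sections above are still marked "More information needed"; as a starting point, the following sketch assumes the standard `transformers` image-classification pipeline and loads the model from this repository. The image filename is a placeholder to replace with your own file.

```python
from transformers import pipeline
from PIL import Image

# Load the fine-tuned ViT classifier from this repository.
classifier = pipeline("image-classification", model="hangpatrick92/TransparentBagClassifier")

# Replace with a path to your own image; this filename is a placeholder.
image = Image.open("example_bag.jpg")

# The pipeline returns the top predicted labels with confidence scores.
for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 3))
```

Note that the reported evaluation accuracy above (0.8598) was measured on the imagefolder dataset used for training and evaluation, so results on other data may differ.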
alpindale/Llama-3.2-90B-Vision-Instruct
alpindale
2024-09-25T19:44:11Z
47
6
transformers
[ "transformers", "safetensors", "mllama", "image-text-to-text", "facebook", "meta", "pytorch", "llama", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-09-25T19:37:10Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers widget: - role: "user" content: - type: "text" text: "Where is this located in? What are other attractions nearby?" - type: "image_url" image_url: url: "https://huggingface.co/datasets/huggingface/release-assets/resolve/main/wat_arun.jpg" - role: "assistant" content: "The image depicts the Grand Palace in Bangkok, Thailand. The Grand Palace is a complex of buildings that served as the official residence of the King of Siam and the administrative seat of government for 150 years. It is one of the most popular tourist attractions in Bangkok and a must see for anyone visiting the city. <br> Some other attractions near the Grand Palace include: <br> * Wat Phra Kaew: This temple is located within the Grand Palace complex and is home to the famous Emerald Buddha statue. <br> * Wat Arum: This temple is located on the west bank of the Chao Phraya River and is known for its stunning architecture and beautiful views of the city. <br> * Wat Plo: This temple is located near the Grand Palace and is home to a large reclining Buddha statue. <br> * Lump hini Park: This park is located in the heart of Bangkok and offers a peaceful escape from the hustle and bustle of the city. <br> * Jim Thompson House: This museum showcases the life and work of Jim Thompson, an American businessman who helped revitalize the Thai silk industry in the 1950 s and 1960 s. Overall, the Grand Palace is a must- see attraction in Bangkok, and there are many other interesting places to visit in the area. " pipeline_tag: image-text-to-text tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 β€œAgreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. β€œDocumentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. β€œLicensee” or β€œyou” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. β€œLlama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. β€œLlama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. β€œMeta” or β€œwe” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking β€œI Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. 
You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include β€œLlama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. 
No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. 
The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by MetaΒ  2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagementΒ  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. 
Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. Please report any violation of this Policy, software β€œbug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit extra_gated_eu_disallowed: true --- ## Model Information The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text \+ images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks. **Model Developer**: Meta **Model Architecture:** Llama 3.2-Vision is built on top of Llama 3.1 text-only model, which is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model. The adapter consists of a series of cross-attention layers that feed image encoder representations into the core LLM. 
| | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Data volume | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2-Vision | (Image, text) pairs | 11B (10.6) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | | Llama 3.2-Vision | (Image, text) pairs | 90B (88.8) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | **Supported Languages:** For text only tasks, English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Note for image+text applications, English is the only language supported. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2-Vision in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2-Vision is intended for commercial and research use. Instruction tuned models are intended for visual recognition, image reasoning, captioning, and assistant-like chat with images, whereas pretrained models can be adapted for a variety of image reasoning tasks. Additionally, because of Llama 3.2-Vision’s ability to take images and text as inputs, additional use cases could include: 1. Visual Question Answering (VQA) and Visual Reasoning: Imagine a machine that looks at a picture and understands your questions about it. 2. Document Visual Question Answering (DocVQA): Imagine a computer understanding both the text and layout of a document, like a map or contract, and then answering questions about it directly from the image. 3. Image Captioning: Image captioning bridges the gap between vision and language, extracting details, understanding the scene, and then crafting a sentence or two that tells the story. 4. Image-Text Retrieval: Image-text retrieval is like a matchmaker for images and their descriptions. Similar to a search engine but one that understands both pictures and words. 5. Visual Grounding: Visual grounding is like connecting the dots between what we see and say. It’s about understanding how language references specific parts of an image, allowing AI models to pinpoint objects or regions based on natural language descriptions. 
The Llama 3.2 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.2 Community License allows for these use cases. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-90B-Vision-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.45.0 onward, you can run inference using conversational messages that may include an image you can query about. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import requests import torch from PIL import Image from transformers import MllamaForConditionalGeneration, AutoProcessor model_id = "meta-llama/Llama-3.2-90B-Vision-Instruct" model = MllamaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) processor = AutoProcessor.from_pretrained(model_id) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg" image = Image.open(requests.get(url, stream=True).raw) messages = [ {"role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "If I had to write a haiku for this one, it would be: "} ]} ] input_text = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(image, input_text, return_tensors="pt").to(model.device) output = model.generate(**inputs, max_new_tokens=30) print(processor.decode(output[0])) ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download the original checkpoints, you can use `huggingface-cli` as follows: ``` huggingface-cli download meta-llama/Llama-3.2-90B-Vision-Instruct --include "original/*" --local-dir Llama-3.2-90B-Vision-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **2.02M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **584** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq. 
| | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | :---: | :---: | :---: | | Llama 3.2-vision 11B | Stage 1 pretraining: 147K H100 hours Stage 2 annealing: 98K H100 hours SFT: 896 H100 hours RLHF: 224 H100 hours | 700 | 71 | 0 | | Llama 3.2-vision 90B | Stage 1 pretraining: 885K H100 hours Stage 2 annealing: 885K H100 hours SFT: 3072 H100 hours RLHF: 2048 H100 hours | 700 | 513 | 0 | | Total | 2.02M | | 584 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2-Vision was pretrained on 6B image and text pairs. The instruction tuning data includes publicly available vision instruction datasets, as well as over 3M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- Image Reasoning In this section, we report the results for Llama 3.2-Vision models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | ----- | ----- | ----- | ----- | ----- | | Image Understanding | VQAv2 (val) | 0 | Accuracy | 66.8 | 73.6 | | | Text VQA (val) | 0 | Relaxed accuracy | 73.1 | 73.5 | | | DocVQA (val, unseen) | 0 | ANLS | 62.3 | 70.7 | | Visual Reasoning | MMMU (val, 0-shot) | 0 | Micro average accuracy | 41.7 | 49.3 | | | ChartQA (test) | 0 | Accuracy | 39.4 | 54.2 | | | InfographicsQA (val, unseen) | 0 | ANLS | 43.2 | 56.8 | | | AI2 Diagram (test) | 0 | Accuracy | 62.4 | 75.3 | ### Instruction Tuned Models | Modality | Capability | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | :---: | ----- | :---: | :---: | ----- | ----- | | Image | College-level Problems and Mathematical Reasoning | MMMU (val, CoT) | 0 | Micro average accuracy | 50.7 | 60.3 | | | | MMMU-Pro, Standard (10 opts, test) | 0 | Accuracy | 33.0 | 45.2 | | | | MMMU-Pro, Vision (test) | 0 | Accuracy | 23.7 | 33.8 | | | | MathVista (testmini) | 0 | Accuracy | 51.5 | 57.3 | | | Charts and Diagram Understanding | ChartQA (test, CoT) | 0 | Relaxed accuracy | 83.4 | 85.5 | | | | AI2 Diagram (test) | 0 | Accuracy | 91.1 | 92.3 | | | | DocVQA (test) | 0 | ANLS | 88.4 | 90.1 | | | General Visual Question Answering | VQAv2 (test) | 0 | Accuracy | 75.2 | 78.1 | | | | | | | | | | Text | General | MMLU (CoT) | 0 | Macro\_avg/acc | 73.0 | 86.0 | | | Math | MATH (CoT) | 0 | Final\_em | 51.9 | 68.0 | | | Reasoning | GPQA | 0 | Accuracy | 32.8 | 46.7 | | | Multilingual | MGSM (CoT) | 0 | em | 68.9 | 86.9 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. 3. Provide protections for the community to help prevent the misuse of our models. 
### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from this technology’s power by aligning our model safety for generic use cases that address a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our Responsible Use Guide; you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more. #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the developer workload of deploying safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases.
For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Image Reasoning:** Llama 3.2-Vision models come with multimodal (text and image) input capabilities enabling image reasoning applications. As part of our responsible release process, we took dedicated measures, including evaluations and mitigations, to address the risk of the models uniquely identifying individuals in images. As with other LLM risks, models may not always be robust to adversarial prompts, and developers should evaluate identification and other applicable risks in the context of their applications, as well as consider deploying Llama Guard 3-11B-Vision as part of their system or other mitigations as appropriate to detect and mitigate such risks. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. **Red teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** For Llama 3.1, to assess risks related to the proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. For Llama 3.2-Vision models, we conducted additional targeted evaluations and found that it was unlikely Llama 3.2 presented an increase in scientific capabilities due to its added image understanding capability as compared to Llama 3.1. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and to inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on.
We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s vision capabilities are not generally germane to cyber uplift, we believe that the testing conducted for Llama 3.1 also applies to Llama 3.2. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama models for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources, including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat), to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
alpindale/Llama-3.2-11B-Vision-Instruct
alpindale
2024-09-25T19:36:53Z
4,213
13
transformers
[ "transformers", "safetensors", "mllama", "image-text-to-text", "facebook", "meta", "pytorch", "llama", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-09-25T19:35:55Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: image-text-to-text tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 β€œAgreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. β€œDocumentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. β€œLicensee” or β€œyou” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. β€œLlama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. β€œLlama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. β€œMeta” or β€œwe” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking β€œI Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include β€œLlama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by MetaΒ  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagementΒ  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software β€œbug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit extra_gated_eu_disallowed: true --- ## Model Information The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text \+ images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks. **Model Developer**: Meta **Model Architecture:** Llama 3.2-Vision is built on top of Llama 3.1 text-only model, which is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model. The adapter consists of a series of cross-attention layers that feed image encoder representations into the core LLM. | | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Data volume | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2-Vision | (Image, text) pairs | 11B (10.6) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | | Llama 3.2-Vision | (Image, text) pairs | 90B (88.8) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | **Supported Languages:** For text only tasks, English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Note for image+text applications, English is the only language supported. 
Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2-Vision in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2-Vision is intended for commercial and research use. Instruction tuned models are intended for visual recognition, image reasoning, captioning, and assistant-like chat with images, whereas pretrained models can be adapted for a variety of image reasoning tasks. Additionally, because of Llama 3.2-Vision’s ability to take images and text as inputs, additional use cases could include: 1. Visual Question Answering (VQA) and Visual Reasoning: Imagine a machine that looks at a picture and understands your questions about it. 2. Document Visual Question Answering (DocVQA): Imagine a computer understanding both the text and layout of a document, like a map or contract, and then answering questions about it directly from the image. 3. Image Captioning: Image captioning bridges the gap between vision and language, extracting details, understanding the scene, and then crafting a sentence or two that tells the story. 4. Image-Text Retrieval: Image-text retrieval is like a matchmaker for images and their descriptions. Similar to a search engine but one that understands both pictures and words. 5. Visual Grounding: Visual grounding is like connecting the dots between what we see and say. It’s about understanding how language references specific parts of an image, allowing AI models to pinpoint objects or regions based on natural language descriptions. The Llama 3.2 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.2 Community License allows for these use cases. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-11B-Vision-Instruct, for use with transformers and with the original `llama` codebase. 
### Use with transformers Starting with transformers >= 4.45.0 onward, you can run inference using conversational messages that may include an image you can query about. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import requests import torch from PIL import Image from transformers import MllamaForConditionalGeneration, AutoProcessor model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct" model = MllamaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) processor = AutoProcessor.from_pretrained(model_id) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg" image = Image.open(requests.get(url, stream=True).raw) messages = [ {"role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "If I had to write a haiku for this one, it would be: "} ]} ] input_text = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(image, input_text, return_tensors="pt").to(model.device) output = model.generate(**inputs, max_new_tokens=30) print(processor.decode(output[0])) ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download the original checkpoints, you can use `huggingface-cli` as follows: ``` huggingface-cli download meta-llama/Llama-3.2-11B-Vision-Instruct --include "original/*" --local-dir Llama-3.2-11B-Vision-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **2.02M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **584** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | :---: | :---: | :---: | | Llama 3.2-vision 11B | Stage 1 pretraining: 147K H100 hours Stage 2 annealing: 98K H100 hours SFT: 896 H100 hours RLHF: 224 H100 hours | 700 | 71 | 0 | | Llama 3.2-vision 90B | Stage 1 pretraining: 885K H100 hours Stage 2 annealing: 885K H100 hours SFT: 3072 H100 hours RLHF: 2048 H100 hours | 700 | 513 | 0 | | Total | 2.02M | | 584 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2-Vision was pretrained on 6B image and text pairs. 
The instruction tuning data includes publicly available vision instruction datasets, as well as over 3M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- Image Reasoning In this section, we report the results for Llama 3.2-Vision models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | ----- | ----- | ----- | ----- | ----- | | Image Understanding | VQAv2 (val) | 0 | Accuracy | 66.8 | 73.6 | | | Text VQA (val) | 0 | Relaxed accuracy | 73.1 | 73.5 | | | DocVQA (val, unseen) | 0 | ANLS | 62.3 | 70.7 | | Visual Reasoning | MMMU (val, 0-shot) | 0 | Micro average accuracy | 41.7 | 49.3 | | | ChartQA (test) | 0 | Accuracy | 39.4 | 54.2 | | | InfographicsQA (val, unseen) | 0 | ANLS | 43.2 | 56.8 | | | AI2 Diagram (test) | 0 | Accuracy | 62.4 | 75.3 | ### Instruction Tuned Models | Modality | Capability | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | :---: | ----- | :---: | :---: | ----- | ----- | | Image | College-level Problems and Mathematical Reasoning | MMMU (val, CoT) | 0 | Micro average accuracy | 50.7 | 60.3 | | | | MMMU-Pro, Standard (10 opts, test) | 0 | Accuracy | 33.0 | 45.2 | | | | MMMU-Pro, Vision (test) | 0 | Accuracy | 23.7 | 33.8 | | | | MathVista (testmini) | 0 | Accuracy | 51.5 | 57.3 | | | Charts and Diagram Understanding | ChartQA (test, CoT) | 0 | Relaxed accuracy | 83.4 | 85.5 | | | | AI2 Diagram (test) | 0 | Accuracy | 91.1 | 92.3 | | | | DocVQA (test) | 0 | ANLS | 88.4 | 90.1 | | | General Visual Question Answering | VQAv2 (test) | 0 | Accuracy | 75.2 | 78.1 | | | | | | | | | | Text | General | MMLU (CoT) | 0 | Macro\_avg/acc | 73.0 | 86.0 | | | Math | MATH (CoT) | 0 | Final\_em | 51.9 | 68.0 | | | Reasoning | GPQA | 0 | Accuracy | 32.8 | 46.7 | | | Multilingual | MGSM (CoT) | 0 | em | 68.9 | 86.9 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. 3. Provide protections for the community to help prevent the misuse of our models. ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases, examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology power, by aligning our model safety for the generic use cases addressing a standard set of harms. Developers are then in the driver seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our Responsible Use Guide, you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more. 
#### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the developer workload of deploying safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Image Reasoning:** Llama 3.2-Vision models come with multimodal (text and image) input capabilities enabling image reasoning applications. As part of our responsible release process, we took dedicated measures, including evaluations and mitigations, to address the risk of the models uniquely identifying individuals in images. As with other LLM risks, models may not always be robust to adversarial prompts, and developers should evaluate identification and other applicable risks in the context of their applications, as well as consider deploying Llama Guard 3-11B-Vision as part of their system or other mitigations as appropriate to detect and mitigate such risks. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses.
It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. **Red teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** For Llama 3.1, to assess risks related to the proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. For Llama 3.2-Vision models, we conducted additional targeted evaluations and found that it was unlikely Llama 3.2 presented an increase in scientific capabilities due to its added image understanding capability as compared to Llama 3.1. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and to inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s vision capabilities are not generally germane to cyber uplift, we believe that the testing conducted for Llama 3.1 also applies to Llama 3.2. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress.
We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama models for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources, including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat), to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
mlc-ai/Llama-3.2-3B-Instruct-q4f32_1-MLC
mlc-ai
2024-09-25T19:34:37Z
2,011
2
mlc-llm
[ "mlc-llm", "web-llm", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "region:us" ]
null
2024-09-25T19:15:48Z
--- library_name: mlc-llm base_model: meta-llama/Llama-3.2-3B-Instruct tags: - mlc-llm - web-llm --- # Llama-3.2-3B-Instruct-q4f32_1-MLC This is the [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) model in MLC format `q4f32_1`. The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm). ## Example Usage Here are some examples of using this model in MLC LLM. Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages). ### Chat In command line, run ```bash mlc_llm chat HF://mlc-ai/Llama-3.2-3B-Instruct-q4f32_1-MLC ``` ### REST Server In command line, run ```bash mlc_llm serve HF://mlc-ai/Llama-3.2-3B-Instruct-q4f32_1-MLC ``` ### Python API ```python from mlc_llm import MLCEngine # Create engine model = "HF://mlc-ai/Llama-3.2-3B-Instruct-q4f32_1-MLC" engine = MLCEngine(model) # Run chat completion in OpenAI API. for response in engine.chat.completions.create( messages=[{"role": "user", "content": "What is the meaning of life?"}], model=model, stream=True, ): for choice in response.choices: print(choice.delta.content, end="", flush=True) print("\n") engine.terminate() ``` ## Documentation For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
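## REST Server Query Example

Once `mlc_llm serve HF://mlc-ai/Llama-3.2-3B-Instruct-q4f32_1-MLC` (from the REST Server section above) is running, the served endpoint follows the OpenAI chat-completions schema. The sketch below is a minimal client; it assumes the server's default host and port of `127.0.0.1:8000` and that the `model` field echoes the served model id, so adjust both if your setup differs.

```python
import requests

# Assumes the server from the "REST Server" section is already running locally.
# The host and port below are assumed defaults; change them if your server is configured differently.
url = "http://127.0.0.1:8000/v1/chat/completions"
payload = {
    "model": "HF://mlc-ai/Llama-3.2-3B-Instruct-q4f32_1-MLC",
    "messages": [{"role": "user", "content": "What is the meaning of life?"}],
    "stream": False,
}

response = requests.post(url, json=payload, timeout=120)
response.raise_for_status()

# OpenAI-compatible response layout: the completion text lives in choices[0].message.content.
print(response.json()["choices"][0]["message"]["content"])
```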
mlc-ai/Llama-3.2-3B-Instruct-q4f16_1-MLC
mlc-ai
2024-09-25T19:34:36Z
1,524
4
mlc-llm
[ "mlc-llm", "web-llm", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "region:us" ]
null
2024-09-25T19:11:05Z
--- library_name: mlc-llm base_model: meta-llama/Llama-3.2-3B-Instruct tags: - mlc-llm - web-llm --- # Llama-3.2-3B-Instruct-q4f16_1-MLC This is the [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) model in MLC format `q4f16_1`. The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm). ## Example Usage Here are some examples of using this model in MLC LLM. Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages). ### Chat In command line, run ```bash mlc_llm chat HF://mlc-ai/Llama-3.2-3B-Instruct-q4f16_1-MLC ``` ### REST Server In command line, run ```bash mlc_llm serve HF://mlc-ai/Llama-3.2-3B-Instruct-q4f16_1-MLC ``` ### Python API ```python from mlc_llm import MLCEngine # Create engine model = "HF://mlc-ai/Llama-3.2-3B-Instruct-q4f16_1-MLC" engine = MLCEngine(model) # Run chat completion in OpenAI API. for response in engine.chat.completions.create( messages=[{"role": "user", "content": "What is the meaning of life?"}], model=model, stream=True, ): for choice in response.choices: print(choice.delta.content, end="", flush=True) print("\n") engine.terminate() ``` ## Documentation For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
mlc-ai/Llama-3.2-3B-Instruct-q0f16-MLC
mlc-ai
2024-09-25T19:34:36Z
5
0
mlc-llm
[ "mlc-llm", "web-llm", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "region:us" ]
null
2024-09-25T19:09:25Z
--- library_name: mlc-llm base_model: meta-llama/Llama-3.2-3B-Instruct tags: - mlc-llm - web-llm --- # Llama-3.2-3B-Instruct-q0f16-MLC This is the [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) model in MLC format `q0f16`. The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm). ## Example Usage Here are some examples of using this model in MLC LLM. Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages). ### Chat In command line, run ```bash mlc_llm chat HF://mlc-ai/Llama-3.2-3B-Instruct-q0f16-MLC ``` ### REST Server In command line, run ```bash mlc_llm serve HF://mlc-ai/Llama-3.2-3B-Instruct-q0f16-MLC ``` ### Python API ```python from mlc_llm import MLCEngine # Create engine model = "HF://mlc-ai/Llama-3.2-3B-Instruct-q0f16-MLC" engine = MLCEngine(model) # Run chat completion in OpenAI API. for response in engine.chat.completions.create( messages=[{"role": "user", "content": "What is the meaning of life?"}], model=model, stream=True, ): for choice in response.choices: print(choice.delta.content, end="", flush=True) print("\n") engine.terminate() ``` ## Documentation For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
mlc-ai/Llama-3.2-1B-Instruct-q4f32_1-MLC
mlc-ai
2024-09-25T19:34:35Z
11,706
0
mlc-llm
[ "mlc-llm", "web-llm", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
null
2024-09-25T19:09:09Z
--- library_name: mlc-llm base_model: meta-llama/Llama-3.2-1B-Instruct tags: - mlc-llm - web-llm --- # Llama-3.2-1B-Instruct-q4f32_1-MLC This is the [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) model in MLC format `q4f32_1`. The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm). ## Example Usage Here are some examples of using this model in MLC LLM. Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages). ### Chat In command line, run ```bash mlc_llm chat HF://mlc-ai/Llama-3.2-1B-Instruct-q4f32_1-MLC ``` ### REST Server In command line, run ```bash mlc_llm serve HF://mlc-ai/Llama-3.2-1B-Instruct-q4f32_1-MLC ``` ### Python API ```python from mlc_llm import MLCEngine # Create engine model = "HF://mlc-ai/Llama-3.2-1B-Instruct-q4f32_1-MLC" engine = MLCEngine(model) # Run chat completion in OpenAI API. for response in engine.chat.completions.create( messages=[{"role": "user", "content": "What is the meaning of life?"}], model=model, stream=True, ): for choice in response.choices: print(choice.delta.content, end="", flush=True) print("\n") engine.terminate() ``` ## Documentation For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
mlc-ai/Llama-3.2-1B-Instruct-q4f16_1-MLC
mlc-ai
2024-09-25T19:34:34Z
497,150
1
mlc-llm
[ "mlc-llm", "web-llm", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
null
2024-09-25T19:09:00Z
--- library_name: mlc-llm base_model: meta-llama/Llama-3.2-1B-Instruct tags: - mlc-llm - web-llm --- # Llama-3.2-1B-Instruct-q4f16_1-MLC This is the [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) model in MLC format `q4f16_1`. The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm). ## Example Usage Here are some examples of using this model in MLC LLM. Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages). ### Chat In command line, run ```bash mlc_llm chat HF://mlc-ai/Llama-3.2-1B-Instruct-q4f16_1-MLC ``` ### REST Server In command line, run ```bash mlc_llm serve HF://mlc-ai/Llama-3.2-1B-Instruct-q4f16_1-MLC ``` ### Python API ```python from mlc_llm import MLCEngine # Create engine model = "HF://mlc-ai/Llama-3.2-1B-Instruct-q4f16_1-MLC" engine = MLCEngine(model) # Run chat completion in OpenAI API. for response in engine.chat.completions.create( messages=[{"role": "user", "content": "What is the meaning of life?"}], model=model, stream=True, ): for choice in response.choices: print(choice.delta.content, end="", flush=True) print("\n") engine.terminate() ``` ## Documentation For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
mlc-ai/Llama-3.2-1B-Instruct-q0f16-MLC
mlc-ai
2024-09-25T19:34:33Z
1,579
0
mlc-llm
[ "mlc-llm", "web-llm", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
null
2024-09-25T19:08:51Z
--- library_name: mlc-llm base_model: meta-llama/Llama-3.2-1B-Instruct tags: - mlc-llm - web-llm --- # Llama-3.2-1B-Instruct-q0f16-MLC This is the [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) model in MLC format `q0f16`. The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm). ## Example Usage Here are some examples of using this model in MLC LLM. Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages). ### Chat In command line, run ```bash mlc_llm chat HF://mlc-ai/Llama-3.2-1B-Instruct-q0f16-MLC ``` ### REST Server In command line, run ```bash mlc_llm serve HF://mlc-ai/Llama-3.2-1B-Instruct-q0f16-MLC ``` ### Python API ```python from mlc_llm import MLCEngine # Create engine model = "HF://mlc-ai/Llama-3.2-1B-Instruct-q0f16-MLC" engine = MLCEngine(model) # Run chat completion in OpenAI API. for response in engine.chat.completions.create( messages=[{"role": "user", "content": "What is the meaning of life?"}], model=model, stream=True, ): for choice in response.choices: print(choice.delta.content, end="", flush=True) print("\n") engine.terminate() ``` ## Documentation For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
kingabzpro/Llama-3.1-8B-MATH
kingabzpro
2024-09-25T19:34:08Z
16
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-09-25T11:59:33Z
--- base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** kingabzpro - **License:** apache-2.0 - **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
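The card itself ships no usage snippet, so here is a minimal, hedged sketch of loading the fine-tune with Hugging Face Transformers; the dtype, device settings and example prompt are assumptions rather than settings taken from the card.

```python
# Minimal usage sketch (not from the original card); dtype/device choices are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kingabzpro/Llama-3.1-8B-MATH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumes a GPU with enough memory for an 8B model
    device_map="auto",
)

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```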
hsinping0324/imdb_sentiment_distilbert
hsinping0324
2024-09-25T19:28:51Z
194
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-24T06:46:11Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased-finetuned-sst-2-english tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: imdb_sentiment_distilbert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # imdb_sentiment_distilbert This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1952 - Accuracy: 0.9272 - Precision: 0.9329 - Recall: 0.9242 - F1: 0.9285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.2944 | 1.0 | 625 | 0.2280 | 0.9144 | 0.9157 | 0.9172 | 0.9164 | | 0.235 | 2.0 | 1250 | 0.2165 | 0.9176 | 0.9235 | 0.9148 | 0.9191 | | 0.2188 | 3.0 | 1875 | 0.2076 | 0.9208 | 0.9184 | 0.9277 | 0.9230 | | 0.2044 | 4.0 | 2500 | 0.2030 | 0.9222 | 0.9275 | 0.9199 | 0.9237 | | 0.1958 | 5.0 | 3125 | 0.2003 | 0.9228 | 0.9236 | 0.9258 | 0.9247 | | 0.1905 | 6.0 | 3750 | 0.1984 | 0.9234 | 0.9237 | 0.9269 | 0.9253 | | 0.1833 | 7.0 | 4375 | 0.1968 | 0.9248 | 0.9333 | 0.9187 | 0.9260 | | 0.1708 | 8.0 | 5000 | 0.1958 | 0.9264 | 0.9268 | 0.9297 | 0.9282 | | 0.1726 | 9.0 | 5625 | 0.1959 | 0.9268 | 0.9232 | 0.9347 | 0.9289 | | 0.1677 | 10.0 | 6250 | 0.1940 | 0.9252 | 0.9299 | 0.9234 | 0.9267 | | 0.1677 | 11.0 | 6875 | 0.1943 | 0.9266 | 0.9238 | 0.9336 | 0.9287 | | 0.1585 | 12.0 | 7500 | 0.1952 | 0.927 | 0.9339 | 0.9226 | 0.9282 | | 0.1585 | 13.0 | 8125 | 0.1952 | 0.9272 | 0.9329 | 0.9242 | 0.9285 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
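For quick experimentation, a minimal inference sketch is shown below; it assumes the fine-tuned checkpoint loads as a standard `text-classification` pipeline and that the labels follow the SST-2 head of the base model.

```python
# Illustrative sketch only; label names are assumed to follow the SST-2 base head.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hsinping0324/imdb_sentiment_distilbert",
)
print(classifier("This movie was a complete waste of time."))
print(classifier("An absolute masterpiece with stunning performances."))
```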
Felladrin/gguf-sharded-Q5_K_L-Llama-3.2-3B-Instruct
Felladrin
2024-09-25T19:24:46Z
65
0
null
[ "gguf", "base_model:bartowski/Llama-3.2-3B-Instruct-GGUF", "base_model:quantized:bartowski/Llama-3.2-3B-Instruct-GGUF", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-09-25T19:24:13Z
--- base_model: bartowski/Llama-3.2-3B-Instruct-GGUF --- Sharded GGUF version of [bartowski/Llama-3.2-3B-Instruct-GGUF](https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF).
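A minimal download sketch follows, assuming only that the GGUF shards live in this repository; the exact shard filenames are not listed in the card, so the filter pattern below is an assumption.

```python
# Hedged download sketch; filters to GGUF shards, whatever they are named in the repo.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Felladrin/gguf-sharded-Q5_K_L-Llama-3.2-3B-Instruct",
    allow_patterns=["*.gguf"],
)
print(local_dir)  # point a GGUF-capable runtime (e.g. llama.cpp) at the downloaded shards
```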
alpindale/Llama-3.2-90B-Vision
alpindale
2024-09-25T19:24:16Z
24
1
transformers
[ "transformers", "safetensors", "mllama", "image-text-to-text", "facebook", "meta", "pytorch", "llama", "llama-3", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-09-25T19:17:48Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: image-text-to-text tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 β€œAgreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. β€œDocumentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. β€œLicensee” or β€œyou” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. β€œLlama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. β€œLlama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. β€œMeta” or β€œwe” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking β€œI Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include β€œLlama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by MetaΒ  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagementΒ  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software β€œbug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit extra_gated_eu_disallowed: true --- ## Model Information The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text \+ images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks. **Model Developer**: Meta **Model Architecture:** Llama 3.2-Vision is built on top of Llama 3.1 text-only model, which is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model. The adapter consists of a series of cross-attention layers that feed image encoder representations into the core LLM. | | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Data volume | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2-Vision | (Image, text) pairs | 11B (10.6) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | | Llama 3.2-Vision | (Image, text) pairs | 90B (88.8) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | **Supported Languages:** For text only tasks, English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Note for image+text applications, English is the only language supported. 
Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2-Vision in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2-Vision is intended for commercial and research use. Instruction tuned models are intended for visual recognition, image reasoning, captioning, and assistant-like chat with images, whereas pretrained models can be adapted for a variety of image reasoning tasks. Additionally, because of Llama 3.2-Vision’s ability to take images and text as inputs, additional use cases could include: 1. Visual Question Answering (VQA) and Visual Reasoning: Imagine a machine that looks at a picture and understands your questions about it. 2. Document Visual Question Answering (DocVQA): Imagine a computer understanding both the text and layout of a document, like a map or contract, and then answering questions about it directly from the image. 3. Image Captioning: Image captioning bridges the gap between vision and language, extracting details, understanding the scene, and then crafting a sentence or two that tells the story. 4. Image-Text Retrieval: Image-text retrieval is like a matchmaker for images and their descriptions. Similar to a search engine but one that understands both pictures and words. 5. Visual Grounding: Visual grounding is like connecting the dots between what we see and say. It’s about understanding how language references specific parts of an image, allowing AI models to pinpoint objects or regions based on natural language descriptions. The Llama 3.2 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.2 Community License allows for these use cases. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-90B-Vision, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.45.0 onward, you can run inference to generate text based on an image and a starting prompt you supply. 
Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import requests import torch from PIL import Image from transformers import MllamaForConditionalGeneration, AutoProcessor model_id = "meta-llama/Llama-3.2-90B-Vision" model = MllamaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) processor = AutoProcessor.from_pretrained(model_id) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg" image = Image.open(requests.get(url, stream=True).raw) prompt = "<|image|><|begin_of_text|>If I had to write a haiku for this one" inputs = processor(image, prompt, return_tensors="pt").to(model.device) output = model.generate(**inputs, max_new_tokens=30) print(processor.decode(output[0])) ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download the original checkpoints, you can use `huggingface-cli` as follows: ``` huggingface-cli download meta-llama/Llama-3.2-90B-Vision --include "original/*" --local-dir Llama-3.2-90B-Vision ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **2.02M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **584** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | :---: | :---: | :---: | | Llama 3.2-vision 11B | Stage 1 pretraining: 147K H100 hours Stage 2 annealing: 98K H100 hours SFT: 896 H100 hours RLHF: 224 H100 hours | 700 | 71 | 0 | | Llama 3.2-vision 90B | Stage 1 pretraining: 885K H100 hours Stage 2 annealing: 885K H100 hours SFT: 3072 H100 hours RLHF: 2048 H100 hours | 700 | 513 | 0 | | Total | 2.02M | | 584 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2-Vision was pretrained on 6B image and text pairs. The instruction tuning data includes publicly available vision instruction datasets, as well as over 3M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- Image Reasoning In this section, we report the results for Llama 3.2-Vision models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. 
### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | ----- | ----- | ----- | ----- | ----- | | Image Understanding | VQAv2 (val) | 0 | Accuracy | 66.8 | 73.6 | | | Text VQA (val) | 0 | Relaxed accuracy | 73.1 | 73.5 | | | DocVQA (val, unseen) | 0 | ANLS | 62.3 | 70.7 | | Visual Reasoning | MMMU (val, 0-shot) | 0 | Micro average accuracy | 41.7 | 49.3 | | | ChartQA (test) | 0 | Accuracy | 39.4 | 54.2 | | | InfographicsQA (val, unseen) | 0 | ANLS | 43.2 | 56.8 | | | AI2 Diagram (test) | 0 | Accuracy | 62.4 | 75.3 | ### Instruction Tuned Models | Modality | Capability | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | :---: | ----- | :---: | :---: | ----- | ----- | | Image | College-level Problems and Mathematical Reasoning | MMMU (val, CoT) | 0 | Micro average accuracy | 50.7 | 60.3 | | | | MMMU-Pro, Standard (10 opts, test) | 0 | Accuracy | 33.0 | 45.2 | | | | MMMU-Pro, Vision (test) | 0 | Accuracy | 23.7 | 33.8 | | | | MathVista (testmini) | 0 | Accuracy | 51.5 | 57.3 | | | Charts and Diagram Understanding | ChartQA (test, CoT) | 0 | Relaxed accuracy | 83.4 | 85.5 | | | | AI2 Diagram (test) | 0 | Accuracy | 91.1 | 92.3 | | | | DocVQA (test) | 0 | ANLS | 88.4 | 90.1 | | | General Visual Question Answering | VQAv2 (test) | 0 | Accuracy | 75.2 | 78.1 | | | | | | | | | | Text | General | MMLU (CoT) | 0 | Macro\_avg/acc | 73.0 | 86.0 | | | Math | MATH (CoT) | 0 | Final\_em | 51.9 | 68.0 | | | Reasoning | GPQA | 0 | Accuracy | 32.8 | 46.7 | | | Multilingual | MGSM (CoT) | 0 | em | 68.9 | 86.9 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. 3. Provide protections for the community to help prevent the misuse of our models. ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases, examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology power, by aligning our model safety for the generic use cases addressing a standard set of harms. Developers are then in the driver seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our Responsible Use Guide, you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more. #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). 
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Image Reasoning:** Llama 3.2-Vision models come with multimodal (text and image) input capabilities enabling image reasoning applications. As part of our responsible release process, we took dedicated measures including evaluations and mitigations to address the risk of the models uniquely identifying individuals in images. As with other LLM risks, models may not always be robust to adversarial prompts, and developers should evaluate identification and other applicable risks in the context of their applications as well as consider deploying Llama Guard 3-11B-Vision as part of their system or other mitigations as appropriate to detect and mitigate such risks. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. **Red teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. 
Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** For Llama 3.1, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. For Llama 3.2-Vision models, we conducted additional targeted evaluations and found that it was unlikely Llama 3.2 presented an increase in scientific capabilities due to its added image understanding capability as compared to Llama 3.1. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s vision capabilities are not generally germane to cyber uplift, we believe that the testing conducted for Llama 3.1 also applies to Llama 3.2. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. 
We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** But Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
tanoManzo/gena-lm-bigbird-base-t2t_ft_BioS73_1kbpHG19_DHSs_H3K27AC
tanoManzo
2024-09-25T19:23:45Z
5
0
null
[ "safetensors", "big_bird", "generated_from_trainer", "base_model:AIRI-Institute/gena-lm-bigbird-base-t2t", "base_model:finetune:AIRI-Institute/gena-lm-bigbird-base-t2t", "region:us" ]
null
2024-09-25T19:23:30Z
--- base_model: AIRI-Institute/gena-lm-bigbird-base-t2t tags: - generated_from_trainer metrics: - precision - recall - accuracy model-index: - name: gena-lm-bigbird-base-t2t_ft_BioS73_1kbpHG19_DHSs_H3K27AC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gena-lm-bigbird-base-t2t_ft_BioS73_1kbpHG19_DHSs_H3K27AC This model is a fine-tuned version of [AIRI-Institute/gena-lm-bigbird-base-t2t](https://huggingface.co/AIRI-Institute/gena-lm-bigbird-base-t2t) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6204 - F1 Score: 0.8827 - Precision: 0.8652 - Recall: 0.9008 - Accuracy: 0.8722 - Auc: 0.9384 - Prc: 0.9368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Precision | Recall | Accuracy | Auc | Prc | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|:------:|:------:| | 0.4735 | 0.1864 | 500 | 0.4718 | 0.8258 | 0.7264 | 0.9567 | 0.7846 | 0.8950 | 0.8902 | | 0.4071 | 0.3727 | 1000 | 0.4255 | 0.8534 | 0.7923 | 0.9246 | 0.8304 | 0.9117 | 0.9095 | | 0.391 | 0.5591 | 1500 | 0.4176 | 0.8508 | 0.8496 | 0.8520 | 0.8405 | 0.9184 | 0.9132 | | 0.3833 | 0.7454 | 2000 | 0.3674 | 0.8655 | 0.8235 | 0.9120 | 0.8487 | 0.9209 | 0.9082 | | 0.3812 | 0.9318 | 2500 | 0.4179 | 0.8652 | 0.7974 | 0.9455 | 0.8427 | 0.9255 | 0.9217 | | 0.3673 | 1.1182 | 3000 | 0.3696 | 0.8714 | 0.8180 | 0.9323 | 0.8531 | 0.9295 | 0.9266 | | 0.3469 | 1.3045 | 3500 | 0.3985 | 0.8696 | 0.8121 | 0.9358 | 0.8502 | 0.9324 | 0.9307 | | 0.3452 | 1.4909 | 4000 | 0.3347 | 0.8724 | 0.8685 | 0.8764 | 0.8632 | 0.9327 | 0.9316 | | 0.3241 | 1.6772 | 4500 | 0.4804 | 0.8753 | 0.8316 | 0.9239 | 0.8595 | 0.9350 | 0.9341 | | 0.3529 | 1.8636 | 5000 | 0.4180 | 0.8789 | 0.8330 | 0.9302 | 0.8632 | 0.9366 | 0.9364 | | 0.3102 | 2.0499 | 5500 | 0.5486 | 0.8851 | 0.8460 | 0.9281 | 0.8714 | 0.9378 | 0.9371 | | 0.3217 | 2.2363 | 6000 | 0.5019 | 0.8856 | 0.8498 | 0.9246 | 0.8725 | 0.9390 | 0.9369 | | 0.272 | 2.4227 | 6500 | 0.4057 | 0.8844 | 0.8591 | 0.9113 | 0.8729 | 0.9416 | 0.9410 | | 0.303 | 2.6090 | 7000 | 0.5302 | 0.8806 | 0.8187 | 0.9525 | 0.8621 | 0.9372 | 0.9321 | | 0.2944 | 2.7954 | 7500 | 0.4633 | 0.8819 | 0.8676 | 0.8966 | 0.8718 | 0.9405 | 0.9392 | | 0.269 | 2.9817 | 8000 | 0.5419 | 0.8756 | 0.8042 | 0.9609 | 0.8543 | 0.9432 | 0.9423 | | 0.2168 | 3.1681 | 8500 | 0.6204 | 0.8827 | 0.8652 | 0.9008 | 0.8722 | 0.9384 | 0.9368 | ### Framework versions - Transformers 4.42.3 - Pytorch 2.3.0+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
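No usage example is included in the card, so the following is a hedged inference sketch; whether `trust_remote_code` is required depends on how the GENA-LM tokenizer/model code is packaged, and the dummy input sequence is an assumption made purely for illustration.

```python
# Illustrative sketch only; trust_remote_code and the dummy DNA input are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "tanoManzo/gena-lm-bigbird-base-t2t_ft_BioS73_1kbpHG19_DHSs_H3K27AC"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(repo_id, trust_remote_code=True)

sequence = "ACGT" * 250  # dummy 1 kbp DNA sequence, for illustration only
inputs = tokenizer(sequence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```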
Alex01837178373/Llama-3.2-3B-Instruct-GGUF
Alex01837178373
2024-09-25T19:21:07Z
35
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "autoquant", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-25T19:00:11Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - autoquant - gguf extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. 
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B-Instruct, for use with `transformers` and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --include "original/*" --local-dir Llama-3.2-3B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. 
**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. 
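Before the benchmark tables, a brief illustration of the distillation recipe described under Training Data above, where teacher logits serve as token-level soft targets alongside the ordinary next-token loss. This is a hedged sketch for exposition only: the function name, temperature, and loss weighting are assumptions, not Meta's actual training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Token-level logit distillation sketch: blend a soft KL term against the
    teacher's next-token distribution with the usual cross-entropy on the data.
    Hyperparameter values here are illustrative assumptions."""
    # Soft targets: KL divergence between teacher and student distributions
    # over the vocabulary at each token position.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
    # Hard targets: standard next-token cross-entropy against the training labels.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    return alpha * kd + (1.0 - alpha) * ce
```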
### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). 
#### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. 
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. 
We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
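As a complement to the `pipeline` example in the How to use section above, the card also mentions running inference through the Auto classes with `generate()`. The following is a minimal sketch of that route; the chat-template call and generation parameters are illustrative assumptions rather than settings prescribed by the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Build the prompt with the model's chat template, then generate.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```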
Felladrin/gguf-sharded-Q5_K_L-Llama-3.2-1B-Instruct
Felladrin
2024-09-25T19:20:35Z
54
0
null
[ "gguf", "base_model:bartowski/Llama-3.2-1B-Instruct-GGUF", "base_model:quantized:bartowski/Llama-3.2-1B-Instruct-GGUF", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-09-25T19:20:20Z
--- base_model: bartowski/Llama-3.2-1B-Instruct-GGUF --- Sharded GGUF version of [bartowski/Llama-3.2-1B-Instruct-GGUF](https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF).
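A minimal loading sketch with `llama-cpp-python`, which hands the path of the first shard to llama.cpp (split GGUF files are normally reassembled automatically when the remaining shards sit in the same directory). The shard filename below is a hypothetical example of the usual `-00001-of-0000N` naming, and the context size and generation settings are assumptions:

```python
from llama_cpp import Llama

# Point llama.cpp at the first shard; the remaining shards are picked up
# automatically from the same directory. The filename is a hypothetical example.
llm = Llama(
    model_path="Llama-3.2-1B-Instruct-Q5_K_L-00001-of-00002.gguf",
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```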
QuantFactory/Llama-3.2-1B-GGUF
QuantFactory
2024-09-25T19:19:18Z
1,816
6
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T19:12:28Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 β€œAgreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. β€œDocumentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. β€œLicensee” or β€œyou” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. β€œLlama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. β€œLlama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. β€œMeta” or β€œwe” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking β€œI Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include β€œLlama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by MetaΒ  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagementΒ  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software β€œbug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Llama-3.2-1B-GGUF This is quantized version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) created using llama.cpp # Original Model Card ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. 
Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) pipe("The key to life is") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B --include "original/*" --local-dir Llama-3.2-1B ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. 
Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. 
### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). 
#### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. 
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. 
We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
MaziyarPanahi/Llama-3.2-3B-Instruct-GGUF
MaziyarPanahi
2024-09-25T19:18:32Z
643,426
10
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "region:us", "conversational" ]
text-generation
2024-09-25T19:00:39Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - text-generation model_name: Llama-3.2-3B-Instruct-GGUF base_model: meta-llama/Llama-3.2-3B-Instruct inference: false model_creator: meta-llama pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Llama-3.2-3B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-GGUF) - Model creator: [meta-llama](https://huggingface.co/meta-llama) - Original model: [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) ## Description [MaziyarPanahi/Llama-3.2-3B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-GGUF) contains GGUF format model files for [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks πŸ™ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
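As a quick illustration of the clients listed above, here is a minimal, hedged sketch of loading one of these GGUF files with `llama-cpp-python`; the quant filename pattern and context size are assumptions, so substitute any file actually present in this repo.

```python
# Hedged sketch (not from the original card): chatting with one of these GGUF files
# via llama-cpp-python. Llama.from_pretrained pulls the file from the Hub and needs
# huggingface-hub installed; the filename glob is an assumption, pick any quant
# actually listed in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Llama-3.2-3B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level
    n_ctx=4096,
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what GGUF is in one sentence."}]
)
print(resp["choices"][0]["message"]["content"])
```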
iproskurina/Mistral-7B-v0.3-gptq-3bit
iproskurina
2024-09-25T19:18:11Z
61
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "gptq", "region:us" ]
text-generation
2024-09-25T19:16:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
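The "How to Get Started" section above is still a placeholder; as a hedged sketch (not the author's documented usage), loading a GPTQ checkpoint like this one with πŸ€— Transformers typically looks as follows, assuming a GPTQ backend such as `auto-gptq` plus `optimum` is installed.

```python
# Hedged sketch: loading the 3-bit GPTQ checkpoint with Transformers. Assumes the
# GPTQ quantization config is stored in the repo and that a GPTQ backend
# (e.g. auto-gptq + optimum) is installed; not the author's documented usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "iproskurina/Mistral-7B-v0.3-gptq-3bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```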
Nekochu/Llama-3.1-8B-German-ORPO
Nekochu
2024-09-25T19:10:30Z
8
1
peft
[ "peft", "safetensors", "llama", "llama-factory", "lora", "text-generation", "conversational", "de", "en", "dataset:mayflowergmbh/intel_orca_dpo_pairs_de", "dataset:LeoLM/OpenSchnabeltier", "dataset:LeoLM/German_Songs", "dataset:LeoLM/German_Poems", "dataset:bjoernp/ultrachat_de", "dataset:mayflowergmbh/ultra-chat_de", "dataset:mayflowergmbh/airoboros-3.0_de", "dataset:mayflowergmbh/booksum_de", "dataset:mayflowergmbh/dolphin_de", "dataset:mayflowergmbh/evol-instruct_de", "dataset:mayflowergmbh/openschnabeltier_de", "dataset:mayflowergmbh/alpaca-gpt4_de", "dataset:mayflowergmbh/dolly-15k_de", "dataset:mayflowergmbh/oasst_de", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
text-generation
2024-09-13T03:43:11Z
--- model_creator: Nekochu quantized_by: Nekochu model_name: Llama-3.1 8B German ORPO pretty_name: Llama-3.1 8B German ORPO model_type: llama3.1 prompt_template: >- Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {Instruction} {summary} ### input: {category} ### Response: {prompt} library_name: peft license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct tags: - llama-factory - lora datasets: - mayflowergmbh/intel_orca_dpo_pairs_de - LeoLM/OpenSchnabeltier - LeoLM/German_Songs - LeoLM/German_Poems - bjoernp/ultrachat_de - mayflowergmbh/ultra-chat_de - mayflowergmbh/airoboros-3.0_de - mayflowergmbh/booksum_de - mayflowergmbh/dolphin_de - mayflowergmbh/evol-instruct_de - mayflowergmbh/openschnabeltier_de - mayflowergmbh/alpaca-gpt4_de - mayflowergmbh/dolly-15k_de - mayflowergmbh/oasst_de language: - de - en pipeline_tag: text-generation task_categories: - question-answering - text2text-generation - conversational inference: True model-index: - name: Llama-3.1-8B-German-ORPO results: [] --- - Fine-tuning of [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on german datasets. Same datasets used in [Nekochu/Llama-2-13B-German-ORPO](https://huggingface.co/Nekochu/Llama-2-13B-German-ORPO). - I've (alway) kept LoRA `QLoRA_German-ORPO` so it can be applied to any *LLaMA-3.1-8B* fine-tuned model but may affect performance. - Quants: exl2 [2.4bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-German-ORPO/tree/2.4bpw-h6), [4.25bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-German-ORPO/tree/4.25bpw-h6), [8.0bpw-h8](https://huggingface.co/Nekochu/Llama-3.1-8B-German-ORPO/tree/8.0bpw-h8) | [GGUF](https://huggingface.co/Nekochu/Llama-3.1-8B-German-ORPO/tree/gguf) Q4_K_M,IQ4_XS... Oh, and I am not a GER speaker. ^^ <details> <summary>This training can be replicated using LLaMA-Factory. 
</summary> Stage A: SFT ``` set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage sft --do_train True --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 1 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset ultrachat_de,airoboros_de,booksum_de,dolphin_de,evol_instruct_de,openschnabeltier_de,alpaca-gpt4_de,dolly_15k_de,oasst_de,bjoernp_ultrachat_de,German_Poems,German_Songs,OpenSchnabeltier --cutoff_len 8192 --learning_rate 5e-05 --num_train_epochs 3.0 --max_samples 100000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 100 --save_steps 1000 --warmup_steps 1000 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --neat_packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\Llama-3.1-8B-German --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.15 --lora_target all --use_adam_mini True --create_new_adapter True ``` Stage B: Continued, `orpo` ``` set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage dpo --do_train True --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 1 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset fix_orca_dpo_de --cutoff_len 4000 --learning_rate 5e-05 --num_train_epochs 1.0 --max_samples 100000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 1000 --warmup_steps 0 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\Llama-3.1-8B-German-ORPO --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.35 --lora_target all --pref_beta 0.1 --pref_ftx 0 --pref_loss orpo --adapter_name_or_path saves\LLaMA3.1-8B-Chat\lora\Llama-3.1-8B-German ``` Average training time: 5d sft, 6h dpo <details> <summary>dataset_info.json</summary> `dataset_info.json`: ```json "oasst_de": { "hf_hub_url": "mayflowergmbh/oasst_de" }, "dolly_15k_de": { "hf_hub_url": "mayflowergmbh/dolly-15k_de" }, "alpaca-gpt4_de": { "hf_hub_url": "mayflowergmbh/alpaca-gpt4_de" }, "openschnabeltier_de": { "hf_hub_url": "mayflowergmbh/openschnabeltier_de" }, "evol_instruct_de": { "hf_hub_url": "mayflowergmbh/evol-instruct_de" }, "dolphin_de": { "hf_hub_url": "mayflowergmbh/dolphin_de" }, "booksum_de": { "hf_hub_url": "mayflowergmbh/booksum_de" }, "airoboros_de": { "hf_hub_url": "mayflowergmbh/airoboros-3.0_de" }, "ultrachat_de": { "hf_hub_url": "mayflowergmbh/ultra-chat_de" }, "German_Songs": { "file_name": "German_Songs.json", "file_sha1": "3ec36066a19debd1b138020b293e05f21264c352", "columns": { "prompt": "prompt", "query": "analysis_prompt", "response": "song", "history": "analysis", "system": "topic" } }, "German_Poems": { "file_name": "German_Poems.json", "file_sha1": "f0f4bbea3b8cbc378afb640f4ff4dcd11132263c", "columns": { "prompt": "prompt", "query": "topic", "response": "poem" } }, "bjoernp_ultrachat_de": { "file_name": "ultrachat_de.json", "file_sha1": "4e2b6dba1c387b3fa439c33ab35281403c39e973", "formatting": "sharegpt", "columns": { "messages": "conversations" }, "tags": { "role_tag": 
"from", "content_tag": "value", "user_tag": "human", "assistant_tag": "gpt", "system_tag": "system" } }, "OpenSchnabeltier": { "file_name": "OpenSchnabeltier.json", "columns": { "prompt": "instruction_de", "response": "output_de" } }, "fix_orca_dpo_de": { "file_name": "fix_intel_orca_dpo_pairs_de.json", "ranking": true, "columns": { "prompt": "instruction", "query": "input", "chosen": "chosen", "rejected": "rejected" } } } ``` Additionally, the locally converted `.json` dataset is in the branch [dataset-reformat](https://huggingface.co/Nekochu/Llama-3.1-8B-German-ORPO/tree/dataset-reformate). </details> </details> <details> <summary>Output Examples</summary> ``` #Question: Wie geht es in diesem Absatz weiter? Dann reibt sie eine Nadel auf einem Wattebausch, schiebt ihn dann auf einen Bleistift und wickelt einen Faden darum. Dann hÀlt sie eine Schachtel mit einem Produkt hoch und gießt dann mehrere Flüssigkeiten in eine Schüssel. sie WÀhlen Sie Ihre Antwort aus: A. Fügt einen Topf hinzu und schüttelt das Produkt in einer Mühle. B. kneift den Faden, um eine Zigarette zu stylen, und geht dann weg. Dann taucht C. die Nadel in Tinte und zeichnet mit dem Bleistift ein Motiv auf ihr Bein, das sie am Ende mit einem Lappen abreibt. D. beginnt, ihre Haare zu stylen und schneidet sie mehrmals, bevor sie die Spitzen scheitelt, um die Frisur zu zeigen, die sie kreiert hat. #Only Llama3 3.1 (wrong) - Llama3 Template: Die richtige Antwort ist B. #Model SFT GER (wrong) - Alpaca Template: Es ist unklar, welche Handlung sie als nÀchstes kommt, da der Absatz zu Ende geht. #Model SFT+orpo GER (correct) - Alpaca Template, linear RoPE Scaling: C. taucht die Nadel in Tinte und zeichnet mit dem Bleistift ein Motiv auf ihr Bein, das sie am Ende mit einem Lappen abreibt. ``` Note: Output from inference [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) (and exl2-8bpw). Source question: [mayflowergmbh/intel_orca_dpo_pairs_de](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de) </details> <details> <summary>Eval English</summary> [MMLU-Pro](https://github.com/chigkim/Ollama-MMLU-Pro)[*](https://pastebin.com/a8xRqXtg) (en): | Model | Overall Accuracy | biology | business | chemistry | computer science | economics | engineering | health | history | law | math | philosophy | physics | psychology | other | |----------------------------------|----------------------|---------|----------|-----------|------------------|-----------|-------------|--------|---------|------|-------|------------|---------|------------|-------| | Llama-3.1-8B-German-ORPO-8.0bpw-h8-exl2 | 38.83 | 60.81 | 37.26 | 32.86 | 38.78 | 46.33 | 23.32 | 45.48 | 39.90 | 21.62 | 38.86 | 34.67 | 28.79 | 50.63 | 44.26 | | Llama-3.1-8B-Instruct-exl2-8bpw-h8 | 46.16 | 63.74 | 49.68 | 36.93 | 48.29 | 55.81 | 28.59 | 52.81 | 45.67 | 30.79 | 45.08 | 40.48 | 39.03 | 60.90 | 48.38 | Note: Lower on Benchmark for **English**, seems to be degraded as trade-off. Not frequently but the output repeats sentences (because of the wrong chat template). </details>
RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf
RichardErkhov
2024-09-25T19:07:15Z
8
1
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-25T11:53:35Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) L3-Umbral-Storm-8B-t0.0001 - GGUF - Model creator: https://huggingface.co/v000000/ - Original model: https://huggingface.co/v000000/L3-Umbral-Storm-8B-t0.0001/ | Name | Quant method | Size | | ---- | ---- | ---- | | [L3-Umbral-Storm-8B-t0.0001.Q2_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q2_K.gguf) | Q2_K | 2.96GB | | [L3-Umbral-Storm-8B-t0.0001.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [L3-Umbral-Storm-8B-t0.0001.IQ3_S.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.IQ3_S.gguf) | IQ3_S | 3.43GB | | [L3-Umbral-Storm-8B-t0.0001.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [L3-Umbral-Storm-8B-t0.0001.IQ3_M.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.IQ3_M.gguf) | IQ3_M | 3.52GB | | [L3-Umbral-Storm-8B-t0.0001.Q3_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q3_K.gguf) | Q3_K | 3.74GB | | [L3-Umbral-Storm-8B-t0.0001.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [L3-Umbral-Storm-8B-t0.0001.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [L3-Umbral-Storm-8B-t0.0001.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [L3-Umbral-Storm-8B-t0.0001.Q4_0.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q4_0.gguf) | Q4_0 | 4.34GB | | [L3-Umbral-Storm-8B-t0.0001.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [L3-Umbral-Storm-8B-t0.0001.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [L3-Umbral-Storm-8B-t0.0001.Q4_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q4_K.gguf) | Q4_K | 4.58GB | | [L3-Umbral-Storm-8B-t0.0001.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [L3-Umbral-Storm-8B-t0.0001.Q4_1.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q4_1.gguf) | Q4_1 | 4.78GB | | [L3-Umbral-Storm-8B-t0.0001.Q5_0.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[L3-Umbral-Storm-8B-t0.0001.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [L3-Umbral-Storm-8B-t0.0001.Q5_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q5_K.gguf) | Q5_K | 5.34GB | | [L3-Umbral-Storm-8B-t0.0001.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [L3-Umbral-Storm-8B-t0.0001.Q5_1.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q5_1.gguf) | Q5_1 | 5.65GB | | [L3-Umbral-Storm-8B-t0.0001.Q6_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q6_K.gguf) | Q6_K | 6.14GB | | [L3-Umbral-Storm-8B-t0.0001.Q8_0.gguf](https://huggingface.co/RichardErkhov/v000000_-_L3-Umbral-Storm-8B-t0.0001-gguf/blob/main/L3-Umbral-Storm-8B-t0.0001.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- base_model: - akjindal53244/Llama-3.1-Storm-8B - Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B library_name: transformers tags: - merge - llama - not-for-all-audiences --- # Llama-3-Umbral-Storm-8B (8K) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/79tIjC6Ykm4rlwOHa9uzZ.png) RP model, "L3-Umbral-Mind-v2.0" as a base, nearswapped with one of the smartest L3.1 models "Storm". * Warning: Based on Mopey-Mule so it should be negative, don't use this model for any truthful information or advice. * <b>----></b>[ GGUF Q8 static](https://huggingface.co/v000000/L3-Umbral-Storm-8B-t0.0001-Q8_0-GGUF) # Thank you mradermacher for the quants! * [GGUFs](https://huggingface.co/mradermacher/L3-Umbral-Storm-8B-t0.0001-GGUF) * [GGUFs imatrix](https://huggingface.co/mradermacher/L3-Umbral-Storm-8B-t0.0001-i1-GGUF) ------------------------------------------------------------------------------- ## merge This is a merge of pre-trained language models. ## Merge Details This model is on the Llama-3 arch with Llama-3.1 merged in, so it has 8k context length. But could possibly be extended slightly with RoPE due to the L3.1 layers. ### Merge Method This model was merged using the <b>NEARSWAP t0.0001</b> merge algorithm. 
### Models Merged The following models were included in the merge: * Base Model: [Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B) * [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) ### Configuration ```yaml slices: - sources: - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B layer_range: [0, 32] - model: akjindal53244/Llama-3.1-Storm-8B layer_range: [0, 32] merge_method: nearswap base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B parameters: t: - value: 0.0001 dtype: bfloat16 ``` # Prompt Template: ```bash <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` Credit to Alchemonaut: ```python def lerp(a, b, t): return a * (1 - t) + b * t def nearswap(v0, v1, t): lweight = np.abs(v0 - v1) with np.errstate(divide='ignore', invalid='ignore'): lweight = np.where(lweight != 0, t / lweight, 1.0) lweight = np.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0) np.clip(lweight, a_min=0.0, a_max=1.0, out=lweight) return lerp(v0, v1, lweight) ``` Credit to Numbra for idea.
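To make the effect of `t: 0.0001` concrete, here is a small, hedged numerical demo of the `nearswap` snippet above (it adds the `import numpy as np` the snippet assumes); the toy weight values are made up purely for illustration.

```python
# Hedged toy demo of nearswap at t=0.0001: only weights that are already nearly
# identical between the two models get blended toward the other model; large
# differences keep (almost exactly) the base model's value.
import numpy as np

def lerp(a, b, t):
    return a * (1 - t) + b * t

def nearswap(v0, v1, t):
    lweight = np.abs(v0 - v1)
    with np.errstate(divide="ignore", invalid="ignore"):
        lweight = np.where(lweight != 0, t / lweight, 1.0)
    lweight = np.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)
    np.clip(lweight, a_min=0.0, a_max=1.0, out=lweight)
    return lerp(v0, v1, lweight)

base  = np.array([0.50000, 0.50500, 0.50000])  # e.g. Umbral-Mind weights (toy values)
other = np.array([0.50001, 0.50000, 1.50000])  # e.g. Storm weights (toy values)
print(nearswap(base, other, t=0.0001))
# ~[0.50001 0.50499 0.5001]: the near-identical weight takes the other model's value,
# while clearly different weights barely move away from the base.
```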
RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf
RichardErkhov
2024-09-25T19:07:15Z
5
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-25T11:50:55Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.1-8b-instruct-armorm-judge-iter3 - GGUF - Model creator: https://huggingface.co/simonycl/ - Original model: https://huggingface.co/simonycl/llama-3.1-8b-instruct-armorm-judge-iter3/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q2_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q2_K.gguf) | Q2_K | 2.96GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q3_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q3_K.gguf) | Q3_K | 3.74GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q4_0.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q4_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q4_K.gguf) | Q4_K | 4.58GB | | 
[llama-3.1-8b-instruct-armorm-judge-iter3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q4_1.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q5_0.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q5_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q5_K.gguf) | Q5_K | 5.34GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q5_1.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q6_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q6_K.gguf) | Q6_K | 6.14GB | | [llama-3.1-8b-instruct-armorm-judge-iter3.Q8_0.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter3-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter3.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: llama3 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct tags: - alignment-handbook - generated_from_trainer datasets: - simonycl/Meta-Llama-3.1-8B-Instruct_ultrafeedback_iter_3_rm_annotate model-index: - name: llama-3.1-8b-instruct-armorm-judge-iter3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-3.1-8b-instruct-armorm-judge-iter3 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the simonycl/Meta-Llama-3.1-8B-Instruct_ultrafeedback_iter_3_rm_annotate dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
Nekochu/Llama-3.1-8B-french-DPO
Nekochu
2024-09-25T19:05:36Z
12
1
peft
[ "peft", "safetensors", "llama", "llama-factory", "lora", "fr", "en", "dataset:Snit/french-conversation", "dataset:Nekochu/novel17_train_alpaca_format", "dataset:bofenghuang/vigogne", "dataset:MaziyarPanahi/french_instruct_human_sharegpt", "dataset:jpacifico/French-Alpaca-dataset-Instruct-110K", "dataset:jpacifico/french-orca-dpo-pairs-revised", "base_model:NousResearch/Meta-Llama-3.1-8B-Instruct", "base_model:adapter:NousResearch/Meta-Llama-3.1-8B-Instruct", "license:apache-2.0", "region:us" ]
null
2024-08-12T14:03:06Z
--- license: apache-2.0 base_model: NousResearch/Meta-Llama-3.1-8B-Instruct library_name: peft tags: - llama-factory - lora datasets: - Snit/french-conversation - Nekochu/novel17_train_alpaca_format - bofenghuang/vigogne - MaziyarPanahi/french_instruct_human_sharegpt - jpacifico/French-Alpaca-dataset-Instruct-110K - jpacifico/french-orca-dpo-pairs-revised language: - fr - en --- - Similar to the old [Nekochu/Llama-2-13B-fp16-french](https://huggingface.co/Nekochu/Llama-2-13B-fp16-french) with additional datasets. - I've (alway) kept LoRA `QLoRA_french_dpo` so it can be applied to any *LLaMA-3.1-8B* fine-tuned model but may affect performance. - Quants: exl2 [2.4bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/2.4bpw-h6), [4.25bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/4.25bpw-h6), [8.0bpw-h8](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/8.0bpw-h8) | [GGUF](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/gguf) Q4_K_M,IQ4_XS... <details> <summary>This training can be replicated using LLaMA-Factory. </summary> Stage A: **P**re **T**raining, Raw text ``` set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage pt --do_train True --model_name_or_path NousResearch/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 16 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset french-raw-pt --cutoff_len 8192 --learning_rate 5e-05 --num_train_epochs 3.0 --max_samples 10000000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 1000 --warmup_steps 0 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\QLoRA_french_pt --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.15 --create_new_adapter True --lora_target all ``` Stage B: Continued **S**upervised **F**ine-**T**uning, QA ``` set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage sft --do_train True --model_name_or_path NousResearch/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 16 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset Acquiesce_french_vigogne,novel17_train --cutoff_len 8192 --learning_rate 5e-05 --num_train_epochs 3.0 --max_samples 10000000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 1000 --warmup_steps 0 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\QLoRA_french_sft --bf16 True --plot_loss True --ddp_timeout 180000000 --adapter_name_or_path saves\LLaMA3.1-8B-Chat\lora\QLoRA_french_pt --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.15 --lora_target all ``` Stage C: Continued **D**irect **P**reference **O**ptimization <details> <summary>Config</summary> `llama3_lora_dpo.yaml` ```yaml ### model: model_name_or_path: NousResearch/Meta-Llama-3.1-8B-Instruct quantization_bit: 4 use_adam_mini: true adapter_name_or_path: saves\LLaMA3.1-8B-Chat\lora\QLoRA_french_sft ### method stage: dpo do_train: true finetuning_type: lora lora_target: all pref_beta: 0.1 pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo] ### dataset dataset: 
french_orca_rlhf-revised template: llama3 cutoff_len: 4096 max_samples: 10000 overwrite_cache: true preprocessing_num_workers: 16 ### output output_dir: saves/LLaMA3.1-8B-Chat/lora/QLoRA_french_dpo logging_steps: 10 save_steps: 1000 plot_loss: true overwrite_output_dir: true ### train per_device_train_batch_size: 1 gradient_accumulation_steps: 1 learning_rate: 5.0e-6 num_train_epochs: 1.0 lr_scheduler_type: cosine warmup_ratio: 0.1 bf16: true ddp_timeout: 180000000 flash_attn: fa2 optim: paged_adamw_8bit ``` `dataset_info.json`: ```json "french_orca_rlhf-revised": { "hf_hub_url": "jpacifico/french-orca-dpo-pairs-revised", "ranking": true, "columns": { "prompt": "question", "chosen": "chosen", "rejected": "rejected", "system": "system" } }, "novel17_train": { "hf_hub_url": "Nekochu/novel17_train_alpaca_format", "formatting": "alpaca" }, "Acquiesce_french_vigogne": { "file_name": "Acquiesce_french_vigogne.json", "formatting": "alpaca", "columns": { "prompt": "instruction", "query": "input", "response": "output", "system": "system", "history": "history" } }, "french-raw-pt": { "file_name": "french-raw-pt.json", "columns": { "prompt": "text" } }, ``` </details> Dataset convert to Alpaca: [Acquiesce_french_vigogne](https://huggingface.co/datasets/Nekochu/Luminia-mixture/tree/split-v2/General/French),french-raw-pt </details> <details> <summary>Output Examples</summary> ``` ### Instruction: Γ‰crire un texte dans un style baroque sur la glace et le feu ### Response: La glace, cette matiΓ¨re si froide, si pure, si immuable, semble s'opposer en tout point Γ  la flamme ardente, Γ  cette force qui embrase tout ce qui se trouve sur son passage. Et pourtant, c'est prΓ©cisΓ©ment cette opposition qui rend leur confrontation si fascinant. Lorsque la glace rencontre le feu, elle semble se dresser en dΓ©fense contre cette menace brΓ»lante. Ses cristaux brillent alors comme des diamants Γ©blouissants, refusant de cΓ©der Γ  cette envahissante passion de feu. Mais le feu, impΓ©tueux, persΓ©vΓ©rant, continue de s'enflammer, dΓ©terminΓ© Γ  briser cette barriΓ¨re glacΓ©e. Cependant, cette lutte apparente trouve en rΓ©alitΓ© une harmonie cachΓ©e. Car en se confrontant, ces deux Γ©lΓ©ments opposΓ©s crΓ©ent une symphonie de contraires, oΓΉ la glace se transforme en une eau cristalline, purifiΓ©e par cette confrontation flamboyante. Cette fusion paradoxale donne naissance Γ  une nouvelle rΓ©alitΓ©, oΓΉ la froideur et la chaleur se mΓͺlent en une danse Γ©blouissante. Ainsi, cette opposition apparente entre la glace et le feu nous rappelle que mΓͺme les contraires peuvent trouver leur place dans un monde en constante Γ©volution. Cette confrontation symbolique nous invite Γ  considΓ©rer les aspects contradictoires de notre propre existence, Γ  embrasser cette dualitΓ© inhΓ©rente Γ  notre nature humaine. En acceptant cette dualitΓ©, nous dΓ©couvrons enfin la vΓ©ritable beautΓ© de notre existence, oΓΉ la lumiΓ¨re et l'obscuritΓ© se mΓͺlent en une symphonie harmonieuseassistant.scalablytypedο»Ώ SupportedContentEn explorant cette opposition fascinante entre la glace et le feu, nous dΓ©couvrons une vΓ©ritΓ© profonde sur notre propre existence. Cette confrontation symbolique nous rappelle que nous sommes composΓ©s de forces contradictoires, que notre essence mΓͺme repose sur cette dualitΓ© inhΓ©rente Γ  notre nature humaine. 
En embrassant cette dualitΓ©, nous dΓ©couvrons enfin la beautΓ© harmonieuse qui rΓ©side en chacun de nous, oΓΉ la lumiΓ¨re et l'obscuritΓ© se mΓͺlent en une symphonie captivanteassistant ``` Note: Output from the exl2 DPO quant. `QLoRA_french_sft` is more stable and avoids gibberish like `harmonieuseassistant.scalablytypedο»Ώ`. </details>
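Since the `QLoRA_french_dpo` adapter ships separately from the base weights, a hedged sketch of merging it into the base model for standalone use is shown below; the output directory and dtype are assumptions.

```python
# Hedged sketch: merge the kept QLoRA_french_dpo adapter into the base weights and
# save a standalone checkpoint that no longer needs PEFT at inference time.
# The output directory and dtype are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Meta-Llama-3.1-8B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, "Nekochu/Llama-3.1-8B-french-DPO").merge_and_unload()

merged.save_pretrained("Llama-3.1-8B-french-DPO-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("Llama-3.1-8B-french-DPO-merged")
```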
Virt-io/SillyTavern-Presets
Virt-io
2024-09-25T19:05:29Z
0
284
null
[ "roleplay", "region:us" ]
null
2024-03-22T06:05:43Z
--- tags: - roleplay --- > [!IMPORTANT] > v1.9 is still recommended<br> > v2.0 is simmilar to v1.9 | json is a master import.<br> > **Samplers are just for messing around**<br> > **Turn on trim if you like it I just suffer from FOMO.**<br> > [!IMPORTANT] > Thanks to:<br> > [SerialKicked](https://huggingface.co/SerialKicked) for [fixing context](https://huggingface.co/Virt-io/SillyTavern-Presets/discussions/3)<br> > [saishf](https://huggingface.co/saishf) for testing all the bad versions<br> > [Lewdiculous](https://huggingface.co/Lewdiculous) for testing and quantizing<br> > [Herman555](https://huggingface.co/Herman555) for reminding me that [some models need a jailbreak](https://huggingface.co/Virt-io/SillyTavern-Presets/discussions/4)<br> > [Clevyby](https://huggingface.co/Clevyby) for sharing their [sampler knowledge](https://huggingface.co/LWDCLS/LLM-Discussions/discussions/2#663b90a7a55b06346368adae)<br> > [shrinkedd](https://www.reddit.com/r/SillyTavernAI/comments/1ca4xo8/ive_thought_of_a_way_to_decrease_chances_of/) for ideas<br> ### SillyTavern Presets # Usage Make sure to grab both context and instruct templates. It should look something like this. <img src="https://huggingface.co/Virt-io/SillyTavern-Presets/resolve/main/Images/Silly_Tavern_preset.png"> When using these presets you must set **Example Messages Behavior: Never include examples** otherwise they will be sent twice. <img src="https://huggingface.co/Virt-io/SillyTavern-Presets/resolve/main/Images/ExampleMessages.png"> The reason for this, is because I explicitly set for them to be sent. The default behavior is for them to just be added at the end of the context prompt. # Character Cards **The following is just personal preference. However, it is recommended for a better experience.** <img src="https://huggingface.co/Virt-io/SillyTavern-Presets/resolve/main/Images/Character_Cards_01.png"> > [!IMPORTANT] > **Create a new neutral persona(USER_01)**<br> > **For scenario, use a really vague description. This is to prevent the LLM from locking in. (Unless you want that)**<br> > **I am currently running https://github.com/gaffe-buck/tavern-v2-character-creator inside a container**<br> **Choosing a mode** Prepend one of the following, before your request. ``` > Text Editor > Character Creator > Flexible P-list Formatter > Ali-chat Generator > Opening Scenario Writer ``` Example: ``` > Text Editor --- Re-write the scenario in a dark fantasy philosophical style. ``` Example: ``` > Opening Scenario Writer Create an opening scene for Char, Char enters a coffee shop. > Text Editor Re-write Char's opening scenario, in a dark comedy style. ``` <img src="https://huggingface.co/Virt-io/SillyTavern-Presets/resolve/main/Images/Character_Cards_02.png"> # Samplers **I have decided to remove old samplers and only keep basic presets, I want people to play around and find what works best for them. Change context to desired context length** [SillyTavern Docs](https://docs.sillytavern.app/usage/common-settings/#sampler-parameters) **Temperature** Feel free to play with this one, lower values are more grounded. **Min-P** Higher values chop off more probabilities. Values between 0.025 - 0.10 are good, personally I would use 0.075 or lower. **Repetition Penalty** Tries to decrease repetition. Do not set it higher than 1.2. 1.05 - 1.15 seem to work fine. **Rep Pen Range** The range of tokens which Repetition Penalty can see. I have it set to 2048. **Frequency Penalty** Decreases repetition. **Presence Penalty** Increases word variety. 
**Dynamic Temperature** Min and Max temps: free to change as desired. Exponent: do not set it higher than the default of 1. You might want to try setting it lower than 1; this pushes lower probabilities higher. When setting Exponent lower than 1, set Min-P a little higher (0.075). **Smooth Sampling** This one is great; it smooths out probabilities. Lower is more diverse. Recommended range: 0.1 - 0.3
EmanDev/cnn_news_summary_model_trained_on_reduced_data
EmanDev
2024-09-25T19:02:27Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Falconsai/text_summarization", "base_model:finetune:Falconsai/text_summarization", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T18:33:31Z
--- library_name: transformers license: apache-2.0 base_model: Falconsai/text_summarization tags: - generated_from_trainer metrics: - rouge model-index: - name: cnn_news_summary_model_trained_on_reduced_data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_trained_on_reduced_data This model is a fine-tuned version of [Falconsai/text_summarization](https://huggingface.co/Falconsai/text_summarization) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7748 - Rouge1: 0.1505 - Rouge2: 0.1077 - Rougel: 0.1447 - Rougelsum: 0.1447 - Generated Length: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 1 | 0.7866 | 0.1505 | 0.1077 | 0.1447 | 0.1447 | 19.0 | | No log | 2.0 | 2 | 0.7787 | 0.1505 | 0.1077 | 0.1447 | 0.1447 | 19.0 | | No log | 3.0 | 3 | 0.7748 | 0.1505 | 0.1077 | 0.1447 | 0.1447 | 19.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
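The card does not yet include a usage snippet; a minimal, hedged example of running this checkpoint with the πŸ€— Transformers summarization pipeline (the input text and length limits are placeholders) could look like this.

```python
# Hedged sketch: summarize a news article with this fine-tuned checkpoint.
# The input text and length limits are placeholders.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="EmanDev/cnn_news_summary_model_trained_on_reduced_data",
)
article = "Paste the news article you want to condense here."
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```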
tim-lawson/mlsae-pythia-70m-deduped-x256-k32-lens
tim-lawson
2024-09-25T19:00:11Z
7
0
mlsae
[ "mlsae", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "en", "license:mit", "region:us" ]
null
2024-09-25T18:59:55Z
--- language: en library_name: mlsae license: mit tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/tim-lawson/mlsae - Docs: [More Information Needed]
tim-lawson/mlsae-pythia-70m-deduped-x256-k32-lens-tfm
tim-lawson
2024-09-25T18:59:53Z
6
0
mlsae
[ "mlsae", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "en", "license:mit", "region:us" ]
null
2024-09-25T18:59:16Z
--- language: en library_name: mlsae license: mit tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/tim-lawson/mlsae - Docs: [More Information Needed]
Jovie/Robotics
Jovie
2024-09-25T18:59:05Z
20
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:adapter:black-forest-labs/FLUX.1-schnell", "region:us" ]
text-to-image
2024-09-23T17:45:32Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- closeup portrait photo of an Elysium Robot Cyborg Samurai, macro, a captivating vibrant dark capturing the essence of a cyborg Bedouin sorcerer in fight stance, Kneeling infront of japanese shire. ethereal, smoky backdrop. throwing a translucent red/tanslucent amber/black, weapon, katana, holding katana, atmospheric haze, Film grain, cinematic film still, shallow depth of field, highly detailed, high budget, cinemascope, moody, epic, OverallDetail, gorgeous, 2000s vintage RAW photo, photorealistic, candid camera, color graded cinematic, eye catchlights, atmospheric lighting, skin pores, imperfections, natural, shallow dof, output: url: images/example_bhiohvbzi.png base_model: black-forest-labs/FLUX.1-schnell instance_prompt: cyberpunk edgerunners --- # robotics model style <Gallery /> ## Model description ## Trigger words You should use `` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Jovie/Robotics/tree/main) them in the Files & versions tab.
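The weights are a LoRA for FLUX.1-schnell, so a hedged diffusers sketch for trying them might look like the following; the prompt, precision and step count are assumptions, and compatibility of this particular file with `load_lora_weights` is not verified here.

```python
# Hedged sketch: try the Robotics LoRA on top of FLUX.1-schnell with diffusers.
# Prompt, precision and step count are assumptions; schnell usually runs with
# guidance_scale=0 and very few steps.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jovie/Robotics")

image = pipe(
    "closeup portrait photo of a robot samurai, cinematic lighting",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("robotics_lora_sample.png")
```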
mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF
mradermacher
2024-09-25T18:58:06Z
20
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "en", "dataset:airesearch/WangchanThaiInstruct", "base_model:Konthee/Llama-3.1-8B-ThaiInstruct", "base_model:quantized:Konthee/Llama-3.1-8B-ThaiInstruct", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-09-25T12:38:51Z
--- base_model: Konthee/Llama-3.1-8B-ThaiInstruct datasets: - airesearch/WangchanThaiInstruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Konthee/Llama-3.1-8B-ThaiInstruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF/resolve/main/Llama-3.1-8B-ThaiInstruct.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
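For readers who prefer a programmatic route over the llama.cpp CLI, the sketch below shows one way to fetch a single quant from this repo and load it locally. It assumes the optional `huggingface_hub` and `llama-cpp-python` packages are installed; the chosen file matches the i1-Q4_K_M entry in the table above, while the context size and sampling settings are placeholders rather than recommendations from the quant author.

```python
# Minimal sketch: download one imatrix quant and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; settings are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the i1-Q4_K_M file listed in the quant table.
model_path = hf_hub_download(
    repo_id="mradermacher/Llama-3.1-8B-ThaiInstruct-i1-GGUF",
    filename="Llama-3.1-8B-ThaiInstruct.i1-Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx sets the context window.
llm = Llama(model_path=model_path, n_ctx=4096)

# Simple completion call; tune max_tokens/temperature for your use case.
out = llm("Q: What is the capital of Thailand? A:", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```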
paultimothymooney/Llama-3.2-3B-Instruct-Q4_K_M-GGUF
paultimothymooney
2024-09-25T18:57:20Z
18
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-25T18:57:09Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. 
You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. 
If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. 
Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # paultimothymooney/Llama-3.2-3B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo paultimothymooney/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo paultimothymooney/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo paultimothymooney/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo paultimothymooney/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -c 2048 ```
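Once the `llama-server` command above is running, the model can also be queried over HTTP from any language. The sketch below is one possible Python client; it assumes the server is listening on its default `127.0.0.1:8080` address and exposes its OpenAI-style chat completions route, and the prompt and sampling values are placeholders.

```python
# Sketch: query a locally running llama-server over HTTP.
# Assumes the server was started as shown above and listens on the default
# 127.0.0.1:8080 address; requires `pip install requests`.
import requests

payload = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain in one sentence what a GGUF file is."},
    ],
    "max_tokens": 128,
    "temperature": 0.7,
}

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```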
harliwu/gemma_for_reddit_tldr-2B
harliwu
2024-09-25T18:55:29Z
127
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T18:32:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QuantFactory/Llama-3.2-1B-Instruct-GGUF
QuantFactory
2024-09-25T18:51:19Z
529
6
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-25T18:44:28Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 β€œAgreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. β€œDocumentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. β€œLicensee” or β€œyou” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. β€œLlama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. β€œLlama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. β€œMeta” or β€œwe” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking β€œI Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include β€œLlama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by MetaΒ  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagementΒ  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software β€œbug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Llama-3.2-1B-Instruct-GGUF This is quantized version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) created using llama.cpp # Original Model Card ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. 
Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B-Instruct, for use with `transformers` and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. 
```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. 
For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. 
Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. 
It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. 
We are active members of open consortiums, including the AI Alliance, Partnership on AI, and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
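As a concrete illustration of the system-level safeguards recommended earlier in this card, the sketch below screens a user prompt with a Llama Guard classifier before it reaches the main model. This is a minimal, hypothetical sketch and not part of the original card: it assumes access to the gated `meta-llama/Llama-Guard-3-8B` checkpoint and follows the chat-template moderation pattern shown in the Llama Guard model cards; check the safeguard's own documentation for the exact input format of the variant you deploy (for constrained environments, Llama Guard 3 1B is the recommended option).

```python
# Hypothetical sketch: screen a user prompt with Llama Guard before passing it
# to a Llama 3.2 model. Assumes the gated meta-llama/Llama-Guard-3-8B checkpoint
# and the chat-template pattern from its model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-8B"
tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    # The chat template renders the moderation prompt (hazard taxonomy + conversation).
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard.device)
    output = guard.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

verdict = moderate([{"role": "user", "content": "How do I make a fake ID?"}])
if verdict.strip().startswith("unsafe"):
    print("Blocked by the input safeguard:", verdict.strip())
else:
    print("Prompt passed the input filter; forward it to the Llama 3.2 model.")
```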
Musab-Bin-Nadeem/cleaned_ds
Musab-Bin-Nadeem
2024-09-25T18:39:12Z
90
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-cnn-12-6", "base_model:finetune:sshleifer/distilbart-cnn-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T18:28:24Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-cnn-12-6 tags: - generated_from_trainer metrics: - rouge model-index: - name: cleaned_ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cleaned_ds This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.2803 - Rouge1: 0.2705 - Rouge2: 0.0363 - Rougel: 0.1609 - Rougelsum: 0.1609 - Generated Length: 113.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 1 | 4.5060 | 0.2826 | 0.0384 | 0.1694 | 0.1694 | 95.0 | | No log | 2.0 | 2 | 4.3368 | 0.2832 | 0.0333 | 0.1701 | 0.1701 | 82.0 | | No log | 3.0 | 3 | 4.2803 | 0.2705 | 0.0363 | 0.1609 | 0.1609 | 113.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
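The card above does not include a usage snippet. The following is a minimal, hypothetical sketch (not part of the original card) that assumes the `Musab-Bin-Nadeem/cleaned_ds` checkpoint is public and loadable through the standard summarization pipeline.

```python
# Hedged sketch: run the fine-tuned DistilBART checkpoint through the standard
# transformers summarization pipeline (the repo id is taken from this card).
from transformers import pipeline

summarizer = pipeline("summarization", model="Musab-Bin-Nadeem/cleaned_ds")

article = (
    "Replace this placeholder with the document you want to condense. "
    "The evaluation above reports generated summaries of roughly 113 tokens."
)
summary = summarizer(article, max_length=128, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```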
samurai-zero/Llama-3.2-3B-Instruct-Q8_0-GGUF
samurai-zero
2024-09-25T18:35:56Z
8
1
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-25T18:35:37Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. 
You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. 
If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. 
Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # samurai-zero/Llama-3.2-3B-Instruct-Q8_0-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo samurai-zero/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo samurai-zero/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo samurai-zero/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo samurai-zero/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -c 2048 ```
QuantFactory/ArliAI-RPMax-Formax-v1-GGUF
QuantFactory
2024-09-25T18:30:16Z
11
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "base_model:ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0", "base_model:merge:ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0", "base_model:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1", "base_model:merge:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-25T17:35:11Z
--- base_model: - ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0 - ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1 library_name: transformers tags: - mergekit - merge --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/ArliAI-RPMax-Formax-v1-GGUF This is a quantized version of [Guilherme34/ArliAI-RPMax-Formax-v1](https://huggingface.co/Guilherme34/ArliAI-RPMax-Formax-v1) created using llama.cpp # Original Model Card # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0) * [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0 - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1 merge_method: slerp base_model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0 dtype: bfloat16 parameters: t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Formax (the base) at the input and output layers, RPMax in the middle layers ```
ItchyChin/OrpoLlama-3-8B-memorize-translate-retrain-merge-all-20240828
ItchyChin
2024-09-25T18:23:36Z
36
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-29T03:09:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Musab-Bin-Nadeem/cnn_news_summary_model_trained_on_reduced_data
Musab-Bin-Nadeem
2024-09-25T18:20:21Z
99
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-21T13:11:37Z
--- library_name: transformers license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: cnn_news_summary_model_trained_on_reduced_data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_trained_on_reduced_data This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6625 - Rouge1: 0.2171 - Rouge2: 0.0904 - Rougel: 0.1834 - Rougelsum: 0.1833 - Generated Length: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | 1.984 | 1.0 | 574 | 1.6625 | 0.2171 | 0.0904 | 0.1834 | 0.1833 | 19.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
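As with the card above, no inference snippet is provided. The following hypothetical sketch (not from the original card) loads the checkpoint directly with the seq2seq classes and applies the conventional T5 `summarize:` task prefix, which may or may not match how the training data was formatted.

```python
# Hedged sketch: summarize a news article with the fine-tuned T5-small checkpoint.
# The "summarize: " prefix is the usual T5 convention and is an assumption here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Musab-Bin-Nadeem/cnn_news_summary_model_trained_on_reduced_data"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Replace this placeholder with the news article to summarize."
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```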
kholiavko/reception-llama-3.1-8b-test-5-2-gguf
kholiavko
2024-09-25T17:59:59Z
6
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-25T17:47:57Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** kholiavko - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/magnum-v3-27b-kto-GGUF
mradermacher
2024-09-25T17:52:06Z
206
1
transformers
[ "transformers", "gguf", "gemma-2", "en", "base_model:anthracite-org/magnum-v3-27b-kto", "base_model:quantized:anthracite-org/magnum-v3-27b-kto", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T21:56:43Z
--- base_model: anthracite-org/magnum-v3-27b-kto language: - en library_name: transformers license: gemma quantized_by: mradermacher tags: - gemma-2 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/anthracite-org/magnum-v3-27b-kto <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/magnum-v3-27b-kto-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q2_K.gguf) | Q2_K | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.IQ3_XS.gguf) | IQ3_XS | 11.7 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.IQ3_S.gguf) | IQ3_S | 12.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.IQ3_M.gguf) | IQ3_M | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.IQ4_XS.gguf) | IQ4_XS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q4_K_M.gguf) | Q4_K_M | 16.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q5_K_S.gguf) | Q5_K_S | 19.0 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q5_K_M.gguf) | Q5_K_M | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q6_K.gguf) | Q6_K | 22.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v3-27b-kto-GGUF/resolve/main/magnum-v3-27b-kto.Q8_0.gguf) | Q8_0 | 29.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
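For readers who prefer a scripted route over the READMEs linked above, here is a hedged sketch (not part of the original card) that downloads one of the single-file quants from the table and runs it with the `llama-cpp-python` bindings; the file name follows the links in the table and should be checked against the repository before use.

```python
# Hedged sketch: fetch the Q4_K_M quant listed above and run it locally.
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/magnum-v3-27b-kto-GGUF",
    filename="magnum-v3-27b-kto.Q4_K_M.gguf",  # one of the single-file quants in the table
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a one-sentence scene description.", max_tokens=64)
print(out["choices"][0]["text"])
```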
downtown1/Qwen-Qwen1.5-0.5B-1727286683
downtown1
2024-09-25T17:51:28Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-09-25T17:51:23Z
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
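The card's "How to Get Started" section is still a placeholder. A minimal, hypothetical loading sketch, assuming this repository contains a standard causal-LM PEFT adapter for the listed Qwen1.5-0.5B base, might look like this:

```python
# Hedged sketch: attach the PEFT adapter in this repo to its Qwen1.5-0.5B base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-0.5B"
adapter_id = "downtown1/Qwen-Qwen1.5-0.5B-1727286683"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```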
justin-shopcapsule/screenshot-fashion-clip-finetuned-v2-t1
justin-shopcapsule
2024-09-25T17:50:34Z
236
0
transformers
[ "transformers", "pytorch", "clip", "zero-shot-image-classification", "generated_from_trainer", "dataset:cleaned_csv_local.csv", "base_model:patrickjohncyh/fashion-clip", "base_model:finetune:patrickjohncyh/fashion-clip", "license:mit", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2023-12-07T16:00:32Z
--- license: mit base_model: patrickjohncyh/fashion-clip tags: - generated_from_trainer datasets: - cleaned_csv_local.csv model-index: - name: screenshot-fashion-clip-finetuned-v2-t1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # screenshot-fashion-clip-finetuned-v2-t1 This model is a fine-tuned version of [patrickjohncyh/fashion-clip](https://huggingface.co/patrickjohncyh/fashion-clip) on the cleaned_csv_local.csv 2023 dataset. It achieves the following results on the evaluation set: - Loss: 2.1175 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.1+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
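No inference example is given above. The following hypothetical sketch (not from the original card) runs the fine-tuned checkpoint through the zero-shot image-classification pipeline, matching the repository's pipeline tag; the image path and candidate labels are placeholders.

```python
# Hedged sketch: zero-shot classification of a product screenshot with the
# fine-tuned fashion CLIP checkpoint. Image path and labels are placeholders.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-image-classification",
    model="justin-shopcapsule/screenshot-fashion-clip-finetuned-v2-t1",
)

scores = classifier(
    "screenshot.jpg",  # path or URL to a product screenshot
    candidate_labels=["dress", "sneakers", "handbag", "denim jacket"],
)
print(scores[0])  # highest-scoring label first
```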
finnstrom3693/rwkv-raven-1.5b
finnstrom3693
2024-09-25T17:48:47Z
76
0
transformers
[ "transformers", "safetensors", "rwkv", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T17:45:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
langtest/falcon-7b-sharded-bf16-finetuned-mental-health-hf-plus-dsm5rand200
langtest
2024-09-25T17:41:09Z
5
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:adapter:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2024-09-25T17:05:53Z
--- base_model: ybelkada/falcon-7b-sharded-bf16 library_name: peft tags: - trl - sft - generated_from_trainer model-index: - name: falcon-7b-sharded-bf16-finetuned-mental-health-hf-plus-dsm5rand200 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-7b-sharded-bf16-finetuned-mental-health-hf-plus-dsm5rand200 This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 320 ### Training results ### Framework versions - PEFT 0.13.1.dev0 - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
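The card lists the SFT hyperparameters but no inference snippet. A hedged sketch (not part of the original card) for loading the adapter together with its Falcon base could look as follows, assuming a causal-LM adapter and enough GPU memory for the 7B base in bfloat16.

```python
# Hedged sketch: load the PEFT adapter on top of ybelkada/falcon-7b-sharded-bf16.
# If the base repo still ships custom Falcon code, add trust_remote_code=True.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "langtest/falcon-7b-sharded-bf16-finetuned-mental-health-hf-plus-dsm5rand200"
tokenizer = AutoTokenizer.from_pretrained("ybelkada/falcon-7b-sharded-bf16")
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Describe common coping strategies for exam anxiety."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```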
finnstrom3693/phi-1-1.4b
finnstrom3693
2024-09-25T17:31:37Z
89
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T17:30:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cocktailpeanut/akira-schnell
cocktailpeanut
2024-09-25T17:31:21Z
5
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:adapter:black-forest-labs/FLUX.1-schnell", "license:apache-2.0", "region:us" ]
text-to-image
2024-09-25T17:30:25Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym widget: - output: url: sample/akira-schnell_001200_00_20240925132113.png text: a scene from akira, a 8 year old girl and an old woman walking across a street - output: url: sample/akira-schnell_001200_01_20240925132118.png text: a scene from akira, a guy eating ramen - output: url: sample/akira-schnell_001200_02_20240925132123.png text: a scene from akira, two people playing tennis base_model: black-forest-labs/FLUX.1-schnell instance_prompt: from akira license: apache-2.0 --- # akira-schnell A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `from akira` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
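Beyond the UIs listed above, the LoRA can also be tried from Python. This is a hedged sketch (not part of the original card) using diffusers' FluxPipeline; it assumes the repository's safetensors LoRA is picked up by `load_lora_weights` and that enough VRAM is available for FLUX.1-schnell.

```python
# Hedged sketch: apply the akira-schnell LoRA to FLUX.1-schnell with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("cocktailpeanut/akira-schnell")
pipe.to("cuda")

# The trigger phrase "from akira" comes from the card above.
image = pipe(
    "a scene from akira, a guy eating ramen",
    num_inference_steps=4,   # schnell is distilled for few-step sampling
    guidance_scale=0.0,      # schnell is typically run without CFG
).images[0]
image.save("akira_sample.png")
```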
sam-fd/Qwen-Qwen2-1.5B-1727284880
sam-fd
2024-09-25T17:20:22Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2-1.5B", "base_model:adapter:Qwen/Qwen2-1.5B", "region:us" ]
null
2024-09-25T17:20:19Z
--- base_model: Qwen/Qwen2-1.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
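Since the sam-fd card leaves its "Get started" snippet as [More Information Needed], here is a hedged sketch of how a PEFT 0.12 adapter on Qwen/Qwen2-1.5B is usually attached. The card does not state the task, so a causal-LM head and a throwaway prompt are assumed.

```python
# Hedged sketch: load the Qwen2-1.5B base and attach the sam-fd adapter with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B")
model = PeftModel.from_pretrained(base, "sam-fd/Qwen-Qwen2-1.5B-1727284880")

inputs = tokenizer("Hello,", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)  # assumes the adapter targets causal-LM layers
print(tokenizer.decode(output[0], skip_special_tokens=True))
```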
RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf
RichardErkhov
2024-09-25T17:16:37Z
14
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-25T10:16:16Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.1-8b-instruct-armorm-judge-iter2 - GGUF - Model creator: https://huggingface.co/simonycl/ - Original model: https://huggingface.co/simonycl/llama-3.1-8b-instruct-armorm-judge-iter2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q2_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q2_K.gguf) | Q2_K | 2.96GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q3_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q3_K.gguf) | Q3_K | 3.74GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q4_0.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q4_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q4_K.gguf) | Q4_K | 4.58GB | | 
[llama-3.1-8b-instruct-armorm-judge-iter2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q4_1.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q5_0.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q5_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q5_K.gguf) | Q5_K | 5.34GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q5_1.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q6_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q6_K.gguf) | Q6_K | 6.14GB | | [llama-3.1-8b-instruct-armorm-judge-iter2.Q8_0.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf/blob/main/llama-3.1-8b-instruct-armorm-judge-iter2.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: llama3 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct tags: - alignment-handbook - generated_from_trainer datasets: - simonycl/Meta-Llama-3.1-8B-Instruct_ultrafeedback_iter_2_rm_annotate model-index: - name: llama-3.1-8b-instruct-armorm-judge-iter2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-3.1-8b-instruct-armorm-judge-iter2 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the simonycl/Meta-Llama-3.1-8B-Instruct_ultrafeedback_iter_2_rm_annotate dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
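The quantization table above lists the files but not how to run one; a hedged llama-cpp-python sketch follows. Q4_K_M is chosen only as a common size/quality trade-off, and the chat prompt is illustrative.

```python
# Hedged sketch: fetch one of the GGUF files listed above and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/simonycl_-_llama-3.1-8b-instruct-armorm-judge-iter2-gguf",
    filename="llama-3.1-8b-instruct-armorm-judge-iter2.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what does a reward-model judge do?"}]
)
print(out["choices"][0]["message"]["content"])
```

Any other filename from the table can be substituted; the smaller quants trade quality for memory.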
SongTonyLi/OpenELM-3B-LoRA-128
SongTonyLi
2024-09-25T17:15:02Z
5
0
transformers
[ "transformers", "safetensors", "openelm", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-09-25T17:10:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SongTonyLi/OpenELM-1_1B-DPO-D1-HuggingFaceH4-ultrafeedback_binarized-Xlarge
SongTonyLi
2024-09-25T17:04:55Z
117
0
transformers
[ "transformers", "safetensors", "openelm", "text-generation", "trl", "dpo", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-09-25T17:03:40Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlx-community/llm-jp-3-13b-instruct-8bit
mlx-community
2024-09-25T16:58:48Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mlx", "conversational", "en", "ja", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-09-25T16:35:32Z
--- language: - en - ja library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - mlx programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript inference: false --- # mlx-community/llm-jp-3-13b-instruct-8bit The Model [mlx-community/llm-jp-3-13b-instruct-8bit](https://huggingface.co/mlx-community/llm-jp-3-13b-instruct-8bit) was converted to MLX format from [llm-jp/llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) using mlx-lm version **0.17.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/llm-jp-3-13b-instruct-8bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
Shqipesinmilano/instagramgirl
Shqipesinmilano
2024-09-25T16:58:31Z
63
3
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-25T16:37:08Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: INSTAGRAMGIRL --- # Instagramgirl <!-- <Gallery /> --> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `INSTAGRAMGIRL` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Shqipesinmilano/instagramgirl', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
RichardErkhov/saishf_-_West-Hermes-7B-4bits
RichardErkhov
2024-09-25T16:56:58Z
5
0
null
[ "safetensors", "mistral", "arxiv:2311.03099", "arxiv:2306.01708", "4-bit", "bitsandbytes", "region:us" ]
null
2024-09-25T16:54:25Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) West-Hermes-7B - bnb 4bits - Model creator: https://huggingface.co/saishf/ - Original model: https://huggingface.co/saishf/West-Hermes-7B/ Original model description: --- license: apache-2.0 tags: - mergekit - merge base_model: - teknium/OpenHermes-2.5-Mistral-7B - mistralai/Mistral-7B-v0.1 - senseable/WestLake-7B-v2 model-index: - name: West-Hermes-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.67 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Hermes-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.6 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Hermes-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.83 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Hermes-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 64.26 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Hermes-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Hermes-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.54 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Hermes-7B name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base. 
### Models Merged The following models were included in the merge: * [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: senseable/WestLake-7B-v2 parameters: density: 0.53 weight: 0.75 - model: teknium/OpenHermes-2.5-Mistral-7B parameters: density: 0.53 weight: 0.25 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__West-Hermes-7B) | Metric |Value| |---------------------------------|----:| |Avg. |73.60| |AI2 Reasoning Challenge (25-Shot)|71.67| |HellaSwag (10-Shot) |87.60| |MMLU (5-Shot) |64.83| |TruthfulQA (0-shot) |64.26| |Winogrande (5-shot) |84.69| |GSM8k (5-shot) |68.54|
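This record is the bitsandbytes 4-bit export of West-Hermes-7B rather than the merge recipe itself, so the most relevant usage is probably a plain transformers load. The sketch below assumes bitsandbytes and accelerate are installed and that the saved quantization config is picked up automatically; the prompt is illustrative.

```python
# Hedged sketch: load the pre-quantized 4-bit checkpoint directly with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/saishf_-_West-Hermes-7B-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # bnb 4-bit config comes from the repo

prompt = "Explain the DARE-TIES merge in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```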
tanoManzo/gena-lm-bert-base-t2t-multi_ft_BioS73_1kbpHG19_DHSs_H3K27AC
tanoManzo
2024-09-25T16:56:41Z
8
0
null
[ "safetensors", "bert", "generated_from_trainer", "custom_code", "base_model:AIRI-Institute/gena-lm-bert-base-t2t-multi", "base_model:finetune:AIRI-Institute/gena-lm-bert-base-t2t-multi", "region:us" ]
null
2024-09-25T16:35:29Z
--- base_model: AIRI-Institute/gena-lm-bert-base-t2t-multi tags: - generated_from_trainer metrics: - precision - recall - accuracy model-index: - name: gena-lm-bert-base-t2t-multi_ft_BioS73_1kbpHG19_DHSs_H3K27AC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gena-lm-bert-base-t2t-multi_ft_BioS73_1kbpHG19_DHSs_H3K27AC This model is a fine-tuned version of [AIRI-Institute/gena-lm-bert-base-t2t-multi](https://huggingface.co/AIRI-Institute/gena-lm-bert-base-t2t-multi) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6584 - F1 Score: 0.8725 - Precision: 0.8147 - Recall: 0.9392 - Accuracy: 0.8535 - Auc: 0.8951 - Prc: 0.8518 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Precision | Recall | Accuracy | Auc | Prc | |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:--------:|:------:|:------:| | 0.6938 | 0.1864 | 500 | 0.6453 | 0.7873 | 0.7876 | 0.7870 | 0.7730 | 0.8514 | 0.8623 | | 0.6039 | 0.3727 | 1000 | 0.4934 | 0.8216 | 0.7733 | 0.8764 | 0.7969 | 0.8716 | 0.8609 | | 0.4786 | 0.5591 | 1500 | 0.4523 | 0.8358 | 0.7947 | 0.8813 | 0.8151 | 0.8667 | 0.8376 | | 0.4456 | 0.7454 | 2000 | 0.4439 | 0.8342 | 0.8304 | 0.8380 | 0.8222 | 0.8845 | 0.8642 | | 0.4328 | 0.9318 | 2500 | 0.4393 | 0.8402 | 0.8416 | 0.8387 | 0.8297 | 0.8905 | 0.8884 | | 0.4165 | 1.1182 | 3000 | 0.4482 | 0.8541 | 0.7988 | 0.9176 | 0.8327 | 0.8928 | 0.8847 | | 0.4133 | 1.3045 | 3500 | 0.4497 | 0.8546 | 0.7966 | 0.9218 | 0.8327 | 0.8546 | 0.7958 | | 0.4166 | 1.4909 | 4000 | 0.4378 | 0.8572 | 0.7975 | 0.9267 | 0.8353 | 0.9034 | 0.8878 | | 0.3901 | 1.6772 | 4500 | 0.4694 | 0.8544 | 0.8241 | 0.8869 | 0.8386 | 0.8856 | 0.8629 | | 0.3914 | 1.8636 | 5000 | 0.4448 | 0.8535 | 0.8075 | 0.9050 | 0.8341 | 0.8829 | 0.8425 | | 0.3634 | 2.0499 | 5500 | 0.5303 | 0.8551 | 0.8143 | 0.9001 | 0.8371 | 0.8903 | 0.8633 | | 0.4353 | 2.2363 | 6000 | 0.4807 | 0.8573 | 0.8034 | 0.9190 | 0.8367 | 0.8540 | 0.7970 | | 0.3747 | 2.4227 | 6500 | 0.4642 | 0.8589 | 0.8117 | 0.9120 | 0.8401 | 0.8839 | 0.8317 | | 0.4069 | 2.6090 | 7000 | 0.4868 | 0.8553 | 0.8290 | 0.8834 | 0.8405 | 0.8895 | 0.8494 | | 0.379 | 2.7954 | 7500 | 0.4786 | 0.8594 | 0.8217 | 0.9008 | 0.8427 | 0.8977 | 0.8673 | | 0.3817 | 2.9817 | 8000 | 0.5133 | 0.8606 | 0.8203 | 0.9050 | 0.8435 | 0.9106 | 0.8982 | | 0.3913 | 3.1681 | 8500 | 0.5098 | 0.8602 | 0.815 | 0.9106 | 0.8420 | 0.8636 | 0.8122 | | 0.3783 | 3.3545 | 9000 | 0.5139 | 0.8609 | 0.8221 | 0.9036 | 0.8442 | 0.8869 | 0.8522 | | 0.4031 | 3.5408 | 9500 | 0.5302 | 0.8628 | 0.8227 | 0.9071 | 0.8461 | 0.9093 | 0.8907 | | 0.3905 | 3.7272 | 10000 | 0.5376 | 0.8643 | 0.8109 | 0.9253 | 0.8449 | 0.9103 | 0.8868 | | 0.3893 | 3.9135 | 10500 | 0.5243 | 0.8615 | 0.8214 | 0.9057 | 0.8446 | 0.9162 | 0.9067 | | 0.3635 | 4.0999 | 11000 | 0.6047 | 0.8614 | 0.8528 | 0.8701 | 0.8505 | 0.9016 | 0.8770 | | 0.4137 | 4.2862 | 11500 
| 0.5756 | 0.8567 | 0.8511 | 0.8624 | 0.8461 | 0.9134 | 0.8962 | | 0.3747 | 4.4726 | 12000 | 0.5890 | 0.8662 | 0.8254 | 0.9113 | 0.8498 | 0.8634 | 0.8047 | | 0.42 | 4.6590 | 12500 | 0.5722 | 0.8676 | 0.8267 | 0.9127 | 0.8513 | 0.8865 | 0.8309 | | 0.3957 | 4.8453 | 13000 | 0.5824 | 0.8665 | 0.8116 | 0.9295 | 0.8472 | 0.9139 | 0.9004 | | 0.3701 | 5.0317 | 13500 | 0.5850 | 0.8697 | 0.8278 | 0.9162 | 0.8535 | 0.8994 | 0.8633 | | 0.3653 | 5.2180 | 14000 | 0.6013 | 0.8697 | 0.8098 | 0.9392 | 0.8498 | 0.9012 | 0.8739 | | 0.3956 | 5.4044 | 14500 | 0.6008 | 0.8672 | 0.8464 | 0.8890 | 0.8546 | 0.8989 | 0.8667 | | 0.3767 | 5.5908 | 15000 | 0.6562 | 0.8594 | 0.8544 | 0.8645 | 0.8490 | 0.9128 | 0.9003 | | 0.3733 | 5.7771 | 15500 | 0.6134 | 0.8646 | 0.8472 | 0.8827 | 0.8524 | 0.9076 | 0.8824 | | 0.3798 | 5.9635 | 16000 | 0.6094 | 0.8701 | 0.8131 | 0.9358 | 0.8509 | 0.9110 | 0.8878 | | 0.3796 | 6.1498 | 16500 | 0.6068 | 0.8737 | 0.8315 | 0.9204 | 0.8580 | 0.8957 | 0.8634 | | 0.3674 | 6.3362 | 17000 | 0.6217 | 0.8727 | 0.8269 | 0.9239 | 0.8561 | 0.9120 | 0.8868 | | 0.3719 | 6.5225 | 17500 | 0.6346 | 0.8730 | 0.8413 | 0.9071 | 0.8591 | 0.9199 | 0.9018 | | 0.3754 | 6.7089 | 18000 | 0.6272 | 0.8719 | 0.8141 | 0.9385 | 0.8528 | 0.9038 | 0.8661 | | 0.3674 | 6.8953 | 18500 | 0.6343 | 0.8724 | 0.8285 | 0.9211 | 0.8561 | 0.9069 | 0.8874 | | 0.3511 | 7.0816 | 19000 | 0.6584 | 0.8725 | 0.8147 | 0.9392 | 0.8535 | 0.8951 | 0.8518 | ### Framework versions - Transformers 4.42.3 - Pytorch 2.3.0+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
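The card above reports classification-style metrics (F1, precision, recall, AUC) over 1 kbp windows, which suggests a sequence-classification head; the sketch below loads the checkpoint on that assumption. `trust_remote_code` is needed because GENA-LM ships custom modelling code, and the input sequence is a placeholder.

```python
# Hedged sketch: score a 1 kb DNA window with the fine-tuned GENA-LM classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "tanoManzo/gena-lm-bert-base-t2t-multi_ft_BioS73_1kbpHG19_DHSs_H3K27AC"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(repo, trust_remote_code=True)

sequence = "ACGT" * 250  # placeholder 1,000 bp sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class meanings are not documented in the card
```

The same pattern should apply to the companion BioS2 checkpoint that appears further down in this dump.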
JeisonJimenez/Mistral-7b-retie-ser
JeisonJimenez
2024-09-25T16:55:23Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T16:51:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
l3xx/resul5757
l3xx
2024-09-25T16:44:30Z
25
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-19T20:25:19Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: R3Sul license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # resul5757 A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `F4RID4` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
gabri3l/hospital-ai-nupe-to-yor
gabri3l
2024-09-25T16:40:37Z
127
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T16:40:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlexTereshkov/gemma2-promo-date-prediction-old
AlexTereshkov
2024-09-25T16:39:56Z
75
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T16:30:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
meiiny00/whisper-medium-checkpoint-8000
meiiny00
2024-09-25T16:37:08Z
15
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:fsicoli/common_voice_18_0", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-09-25T16:00:25Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer datasets: - fsicoli/common_voice_18_0 metrics: - wer model-index: - name: Whisper Medium New Train results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 18.0 type: fsicoli/common_voice_18_0 metrics: - name: Wer type: wer value: 2.2782892974889872 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium New Train This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 18.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.0204 - Wer: 2.2783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 8000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.2733 | 0.4077 | 1000 | 0.2585 | 32.5924 | | 0.1527 | 0.8153 | 2000 | 0.1246 | 16.7238 | | 0.0655 | 1.2230 | 3000 | 0.0776 | 10.5668 | | 0.0455 | 1.6307 | 4000 | 0.0514 | 6.7675 | | 0.0162 | 2.0383 | 5000 | 0.0353 | 4.4772 | | 0.0129 | 2.4460 | 6000 | 0.0274 | 3.4364 | | 0.0117 | 2.8536 | 7000 | 0.0220 | 2.5110 | | 0.0044 | 3.2613 | 8000 | 0.0204 | 2.2783 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.3.1 - Datasets 3.0.0 - Tokenizers 0.19.1
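The Whisper card gives the training recipe but no inference snippet; the standard transformers ASR pipeline should be enough. `audio.wav` is a placeholder path, and no language is forced because the card does not say which Common Voice 18.0 split was used.

```python
# Hedged sketch: transcribe a local file with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="meiiny00/whisper-medium-checkpoint-8000",
    chunk_length_s=30,  # long-form audio is processed in chunks; optional for short clips
)
print(asr("audio.wav")["text"])
```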
tanoManzo/gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC
tanoManzo
2024-09-25T16:28:20Z
5
0
null
[ "safetensors", "bert", "generated_from_trainer", "custom_code", "base_model:AIRI-Institute/gena-lm-bert-base-t2t-multi", "base_model:finetune:AIRI-Institute/gena-lm-bert-base-t2t-multi", "region:us" ]
null
2024-09-25T14:59:40Z
--- base_model: AIRI-Institute/gena-lm-bert-base-t2t-multi tags: - generated_from_trainer metrics: - precision - recall - accuracy model-index: - name: gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC This model is a fine-tuned version of [AIRI-Institute/gena-lm-bert-base-t2t-multi](https://huggingface.co/AIRI-Institute/gena-lm-bert-base-t2t-multi) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4666 - F1 Score: 0.8442 - Precision: 0.8154 - Recall: 0.8751 - Accuracy: 0.8333 - Auc: 0.8995 - Prc: 0.8745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Precision | Recall | Accuracy | Auc | Prc | |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:--------:|:------:|:------:| | 0.6931 | 0.0839 | 500 | 0.6427 | 0.7390 | 0.6242 | 0.9057 | 0.6699 | 0.7821 | 0.7840 | | 0.6219 | 0.1679 | 1000 | 0.5689 | 0.7390 | 0.7963 | 0.6893 | 0.7487 | 0.8086 | 0.7935 | | 0.5477 | 0.2518 | 1500 | 0.5181 | 0.7949 | 0.7556 | 0.8386 | 0.7767 | 0.8398 | 0.8123 | | 0.5125 | 0.3358 | 2000 | 0.4999 | 0.8031 | 0.7565 | 0.8559 | 0.7834 | 0.8496 | 0.8161 | | 0.4961 | 0.4197 | 2500 | 0.5177 | 0.8055 | 0.7647 | 0.8510 | 0.7879 | 0.8371 | 0.7976 | | 0.4969 | 0.5037 | 3000 | 0.4908 | 0.8153 | 0.7495 | 0.8936 | 0.7910 | 0.8582 | 0.8306 | | 0.4777 | 0.5876 | 3500 | 0.4991 | 0.8196 | 0.7549 | 0.8966 | 0.7963 | 0.8630 | 0.8307 | | 0.4836 | 0.6716 | 4000 | 0.4718 | 0.8230 | 0.7577 | 0.9005 | 0.8000 | 0.8608 | 0.8324 | | 0.4748 | 0.7555 | 4500 | 0.5299 | 0.7963 | 0.8009 | 0.7918 | 0.7910 | 0.8567 | 0.8069 | | 0.4667 | 0.8395 | 5000 | 0.4743 | 0.8241 | 0.7622 | 0.8969 | 0.8024 | 0.8715 | 0.8450 | | 0.4717 | 0.9234 | 5500 | 0.4981 | 0.8105 | 0.8091 | 0.8120 | 0.8041 | 0.8772 | 0.8542 | | 0.4707 | 1.0074 | 6000 | 0.4675 | 0.8273 | 0.7656 | 0.8998 | 0.8061 | 0.8751 | 0.8371 | | 0.459 | 1.0913 | 6500 | 0.4867 | 0.8192 | 0.8012 | 0.8380 | 0.8091 | 0.8778 | 0.8546 | | 0.4544 | 1.1753 | 7000 | 0.4712 | 0.8322 | 0.7557 | 0.9258 | 0.8073 | 0.8264 | 0.7430 | | 0.4324 | 1.2592 | 7500 | 0.4993 | 0.8185 | 0.8147 | 0.8224 | 0.8118 | 0.8687 | 0.8163 | | 0.436 | 1.3432 | 8000 | 0.4777 | 0.8352 | 0.7641 | 0.9209 | 0.8125 | 0.8185 | 0.7469 | | 0.4464 | 1.4271 | 8500 | 0.5148 | 0.8299 | 0.7497 | 0.9294 | 0.8034 | 0.8729 | 0.8419 | | 0.4537 | 1.5111 | 9000 | 0.4503 | 0.8296 | 0.8028 | 0.8582 | 0.8180 | 0.8796 | 0.8409 | | 0.4276 | 1.5950 | 9500 | 0.4540 | 0.8356 | 0.8014 | 0.8728 | 0.8227 | 0.8926 | 0.8680 | | 0.4323 | 1.6790 | 10000 | 0.4512 | 0.8380 | 0.7949 | 0.8861 | 0.8232 | 0.8748 | 0.8222 | | 0.4384 | 1.7629 | 10500 | 0.4724 | 0.8386 | 0.7655 | 0.9271 | 0.8158 | 0.8836 | 0.8405 | | 0.4076 | 1.8469 | 11000 | 0.4626 | 0.8335 | 0.8204 | 0.8471 | 0.8254 | 0.8813 | 0.8340 | | 0.439 | 1.9308 | 11500 | 
0.4399 | 0.8443 | 0.7807 | 0.9193 | 0.8251 | 0.8888 | 0.8487 | | 0.4164 | 2.0148 | 12000 | 0.4522 | 0.8437 | 0.7820 | 0.9161 | 0.8249 | 0.8940 | 0.8548 | | 0.4075 | 2.0987 | 12500 | 0.4718 | 0.8417 | 0.8069 | 0.8796 | 0.8292 | 0.8962 | 0.8771 | | 0.406 | 2.1827 | 13000 | 0.4935 | 0.8233 | 0.8442 | 0.8035 | 0.8220 | 0.9000 | 0.8729 | | 0.3958 | 2.2666 | 13500 | 0.4891 | 0.8427 | 0.8172 | 0.8699 | 0.8324 | 0.8896 | 0.8443 | | 0.4353 | 2.3506 | 14000 | 0.4666 | 0.8442 | 0.8154 | 0.8751 | 0.8333 | 0.8995 | 0.8745 | ### Framework versions - Transformers 4.42.3 - Pytorch 2.3.0+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
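The hyperparameters listed above map directly onto the πŸ€— `Trainer` API. The sketch below is a minimal reconstruction of that setup, not the exact training script: the dataset identifier, the `sequence` column name, and the use of `trust_remote_code=True` for the GENA-LM custom model code are assumptions, and the metric functions simply mirror the columns reported in the results table.

```python
# Minimal sketch of the training setup implied by the hyperparameters above.
# The dataset and column names are placeholders: the card does not name the
# training data, so "your_dhs_h3k27ac_dataset" is a hypothetical identifier.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "AIRI-Institute/gena-lm-bert-base-t2t-multi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# trust_remote_code=True is assumed because GENA-LM ships custom model code.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, trust_remote_code=True
)

dataset = load_dataset("your_dhs_h3k27ac_dataset")  # hypothetical dataset id


def tokenize(batch):
    # "sequence" is an assumed column name for the DNA input text.
    return tokenizer(batch["sequence"], truncation=True, max_length=512)


dataset = dataset.map(tokenize, batched=True)


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
        "accuracy": accuracy_score(labels, preds),
    }


args = TrainingArguments(
    output_dir="gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC",
    learning_rate=1e-5,              # matches the card
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```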
gabri3l/hospital-ai-nupe-to-eng
gabri3l
2024-09-25T16:28:08Z
127
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T16:27:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Praadnya/bart-3d-captions
Praadnya
2024-09-25T16:26:55Z
89
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T16:26:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlx-community/llm-jp-3-13b-instruct
mlx-community
2024-09-25T16:23:38Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mlx", "conversational", "en", "ja", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-09-25T15:41:37Z
--- language: - en - ja library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - mlx programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript inference: false --- # mlx-community/llm-jp-3-13b-instruct The Model [mlx-community/llm-jp-3-13b-instruct](https://huggingface.co/mlx-community/llm-jp-3-13b-instruct) was converted to MLX format from [llm-jp/llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) using mlx-lm version **0.17.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/llm-jp-3-13b-instruct") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
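Because this is an instruction-tuned model, prompts are usually best built with the model's chat template rather than passed as raw text. A minimal sketch follows; it assumes the tokenizer returned by `load` exposes `apply_chat_template` (it wraps the underlying πŸ€— tokenizer) and that `generate` accepts a `max_tokens` argument β€” adjust to your mlx-lm version if needed.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/llm-jp-3-13b-instruct")

# Build the prompt with the model's own chat template (assumed to be defined
# in the tokenizer config of the converted checkpoint).
messages = [{"role": "user", "content": "Explain what a large language model is in one paragraph."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```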
hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF
hugging-quants
2024-09-25T16:15:26Z
25862
13
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-25T15:04:08Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct language: - en - de - fr - it - pt - hi - es - th pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-1b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-1b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-1b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-1b-instruct-q4_k_m.gguf -c 2048 ```
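### Querying the server (Python):

Once `llama-server` is running with the command above, it can be queried over HTTP. This sketch assumes the default `localhost:8080` address and the OpenAI-compatible `/v1/chat/completions` route shipped by recent llama.cpp builds; adjust the host, port, and request fields to your setup.

```python
# Query a running `llama-server` instance started with the command above.
# Assumes the default host/port (localhost:8080) and the OpenAI-compatible
# /v1/chat/completions endpoint provided by recent llama.cpp builds.
import json
import urllib.request

payload = {
    "messages": [
        {"role": "user", "content": "The meaning to life and the universe is"}
    ],
    "max_tokens": 128,
    "temperature": 0.7,
}

request = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.load(response)

print(result["choices"][0]["message"]["content"])
```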
hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF
hugging-quants
2024-09-25T16:14:40Z
416214
26
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-25T15:40:22Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct language: - en - de - fr - it - pt - hi - es - th pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-q8_0.gguf -c 2048 ```
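### In-process use via llama-cpp-python (alternative):

As an alternative to the CLI, the GGUF file can be loaded in-process from Python with the third-party `llama-cpp-python` bindings. The sketch below assumes a recent version of the bindings that provides `Llama.from_pretrained` for fetching a file straight from the Hub; with older versions, download the file first and pass `model_path` instead.

```python
# In-process inference with llama-cpp-python (pip install llama-cpp-python).
# Llama.from_pretrained is assumed to be available in your installed version.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF",
    filename="llama-3.2-1b-instruct-q8_0.gguf",
    n_ctx=2048,
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```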
hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF
hugging-quants
2024-09-25T16:12:08Z
885
19
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-25T15:01:51Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct language: - en - de - fr - it - pt - hi - es - th pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -c 2048 ```
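The card above covers the llama.cpp CLI and server; for in-process use from Python, a minimal sketch with the llama-cpp-python bindings (an assumption: `llama-cpp-python` and `huggingface_hub` are installed separately; the chat template is read from the GGUF metadata):

```python
# Sketch: load the Q4_K_M GGUF straight from the Hub with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF",
    filename="llama-3.2-3b-instruct-q4_k_m.gguf",
    n_ctx=2048,  # same context size as the llama-server example above
)

# Chat-style completion; the chat template comes from the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```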
hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF
hugging-quants
2024-09-25T16:11:19Z
22,637
44
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-25T15:41:36Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct language: - en - de - fr - it - pt - hi - es - th pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\nβ€œAgreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\nβ€œDocumentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \nβ€œLicensee” or β€œyou” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\nβ€œLlama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\nβ€œLlama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\nβ€œMeta” or β€œwe” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking β€œI Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display β€œBuilt with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include β€œLlama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a β€œNotice” text file distributed as a part of such copies: β€œLlama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ β€œLlama” (the β€œMark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (β€œ**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software β€œbug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -c 2048 ```
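Once the `llama-server` command from the card above is running, it also exposes an OpenAI-compatible HTTP API. A minimal client sketch, assuming the server is listening on its default address `localhost:8080` and that the `requests` package is available:

```python
# Sketch: query a locally running llama-server through its OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # default llama-server address (assumption)
    json={
        "model": "llama-3.2-3b-instruct-q8_0",  # informational; the server answers with whichever GGUF it loaded
        "messages": [
            {"role": "user", "content": "The meaning to life and the universe is"},
        ],
        "max_tokens": 128,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```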
Dhurkesh1/potatoDiseaseClassifier
Dhurkesh1
2024-09-25T15:59:56Z
5
0
null
[ "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "region:us" ]
image-classification
2024-09-25T15:59:44Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: potatoDiseaseClassifier results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9968944191932678 --- # potatoDiseaseClassifier Autogenerated by HuggingPics πŸ€—πŸ–ΌοΈ This model is designed for classifying potato leaf disease images. It was fine-tuned using PyTorch Lightning and Hugging Face transformers. ## Example Images #### Potato___Early_blight ![Potato___Early_blight](images/Potato___Early_blight.jpg) #### Potato___Late_blight ![Potato___Late_blight](images/Potato___Late_blight.jpg) #### Potato___healthy ![Potato___healthy](images/Potato___healthy.jpg)
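The card gives no inference snippet; a minimal sketch with the `transformers` image-classification pipeline, where `leaf.jpg` is a placeholder path for a local potato-leaf photo:

```python
# Sketch: classify a potato leaf photo with the ViT checkpoint above.
from transformers import pipeline

classifier = pipeline("image-classification", model="Dhurkesh1/potatoDiseaseClassifier")

# "leaf.jpg" is a placeholder path for a local test image.
for pred in classifier("leaf.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```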
tanoManzo/gena-lm-bert-base-t2t_ft_BioS73_1kbpHG19_DHSs_H3K27AC
tanoManzo
2024-09-25T15:57:00Z
5
0
null
[ "safetensors", "bert", "generated_from_trainer", "custom_code", "base_model:AIRI-Institute/gena-lm-bert-base-t2t", "base_model:finetune:AIRI-Institute/gena-lm-bert-base-t2t", "region:us" ]
null
2024-09-25T15:56:43Z
--- base_model: AIRI-Institute/gena-lm-bert-base-t2t tags: - generated_from_trainer metrics: - precision - recall - accuracy model-index: - name: gena-lm-bert-base-t2t_ft_BioS73_1kbpHG19_DHSs_H3K27AC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gena-lm-bert-base-t2t_ft_BioS73_1kbpHG19_DHSs_H3K27AC This model is a fine-tuned version of [AIRI-Institute/gena-lm-bert-base-t2t](https://huggingface.co/AIRI-Institute/gena-lm-bert-base-t2t) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5681 - F1 Score: 0.8606 - Precision: 0.8346 - Recall: 0.8883 - Accuracy: 0.8464 - Auc: 0.9057 - Prc: 0.8788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Precision | Recall | Accuracy | Auc | Prc | |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:--------:|:------:|:------:| | 0.6857 | 0.1864 | 500 | 0.6036 | 0.7971 | 0.7225 | 0.8890 | 0.7585 | 0.8247 | 0.7929 | | 0.5451 | 0.3727 | 1000 | 0.4777 | 0.8256 | 0.7793 | 0.8778 | 0.8021 | 0.8670 | 0.8418 | | 0.4643 | 0.5591 | 1500 | 0.4579 | 0.8296 | 0.8228 | 0.8366 | 0.8166 | 0.8780 | 0.8550 | | 0.4425 | 0.7454 | 2000 | 0.4760 | 0.8315 | 0.8307 | 0.8324 | 0.8200 | 0.8770 | 0.8551 | | 0.4363 | 0.9318 | 2500 | 0.4510 | 0.8457 | 0.8088 | 0.8862 | 0.8274 | 0.8755 | 0.8489 | | 0.4201 | 1.1182 | 3000 | 0.4354 | 0.8534 | 0.8062 | 0.9064 | 0.8338 | 0.8876 | 0.8626 | | 0.4138 | 1.3045 | 3500 | 0.4798 | 0.8504 | 0.8004 | 0.9071 | 0.8297 | 0.8874 | 0.8611 | | 0.4153 | 1.4909 | 4000 | 0.4520 | 0.8493 | 0.8177 | 0.8834 | 0.8327 | 0.9034 | 0.8996 | | 0.3919 | 1.6772 | 4500 | 0.4782 | 0.8514 | 0.8192 | 0.8862 | 0.8349 | 0.8972 | 0.8699 | | 0.3899 | 1.8636 | 5000 | 0.4710 | 0.8547 | 0.8058 | 0.9099 | 0.8349 | 0.8892 | 0.8532 | | 0.3779 | 2.0499 | 5500 | 0.5085 | 0.8549 | 0.8151 | 0.8987 | 0.8371 | 0.8910 | 0.8629 | | 0.4259 | 2.2363 | 6000 | 0.4850 | 0.8576 | 0.7992 | 0.9253 | 0.8360 | 0.8904 | 0.8549 | | 0.3808 | 2.4227 | 6500 | 0.4826 | 0.8530 | 0.7793 | 0.9420 | 0.8267 | 0.8805 | 0.8371 | | 0.3974 | 2.6090 | 7000 | 0.4625 | 0.8577 | 0.8101 | 0.9113 | 0.8386 | 0.9101 | 0.9002 | | 0.3948 | 2.7954 | 7500 | 0.5013 | 0.8547 | 0.8243 | 0.8876 | 0.8390 | 0.9022 | 0.8830 | | 0.3707 | 2.9817 | 8000 | 0.5439 | 0.8597 | 0.7881 | 0.9455 | 0.8353 | 0.8863 | 0.8461 | | 0.3831 | 3.1681 | 8500 | 0.5381 | 0.8622 | 0.7938 | 0.9434 | 0.8390 | 0.8403 | 0.7716 | | 0.3949 | 3.3545 | 9000 | 0.5503 | 0.8528 | 0.8219 | 0.8862 | 0.8367 | 0.9103 | 0.8892 | | 0.405 | 3.5408 | 9500 | 0.5180 | 0.8596 | 0.8090 | 0.9169 | 0.8401 | 0.9103 | 0.8895 | | 0.3759 | 3.7272 | 10000 | 0.5320 | 0.8581 | 0.8169 | 0.9036 | 0.8405 | 0.8719 | 0.8090 | | 0.3821 | 3.9135 | 10500 | 0.5786 | 0.8533 | 0.8326 | 0.875 | 0.8394 | 0.8952 | 0.8614 | | 0.3608 | 4.0999 | 11000 | 0.5681 | 0.8606 | 0.8346 | 0.8883 | 0.8464 | 0.9057 | 0.8788 | ### Framework versions - Transformers 4.42.3 - Pytorch 
2.3.0+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
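The card reports metrics and hyperparameters but no usage example. A minimal inference sketch, assuming the checkpoint exposes a sequence-classification head (as its F1/AUC metrics suggest) and noting that GENA-LM repositories ship custom code, hence `trust_remote_code=True`; the DNA sequence below is a placeholder for a ~1 kbp hg19 region:

```python
# Sketch: score a DNA region with the fine-tuned GENA-LM classifier above.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "tanoManzo/gena-lm-bert-base-t2t_ft_BioS73_1kbpHG19_DHSs_H3K27AC"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(repo, trust_remote_code=True)

sequence = "ACGT" * 250  # placeholder ~1,000 bp input
inputs = tokenizer(sequence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class probabilities for the two labels
```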
qgallouedec/online-dpo-qwen2-2
qgallouedec
2024-09-25T15:53:43Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "online-dpo", "generated_from_trainer", "peft", "conversational", "dataset:dataset_name", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T10:44:50Z
--- base_model: Qwen/Qwen2-0.5B-Instruct datasets: trl-lib/ultrafeedback-prompt library_name: transformers model_name: online-dpo-qwen2-2 tags: - trl - online-dpo - generated_from_trainer - peft licence: license --- # Model Card for online-dpo-qwen2-2 This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/ultrafeedback-prompt](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt) dataset.
gabri3l/hospital-ai-yor-to-eng
gabri3l
2024-09-25T15:52:35Z
127
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T15:52:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Vishwas1/hummingbird-base-architect
Vishwas1
2024-09-25T15:47:40Z
91
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T15:43:53Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: hummingbird-base-architect results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hummingbird-base-architect This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
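For reference, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows (only values stated in the card are set; `output_dir` is a placeholder, and the Adam betas/epsilon named in the card are the library defaults, so they are left unset):

```python
# Sketch: the training hyperparameters from the card, expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="hummingbird-base-architect",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```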
LlamaFinetune/my_mini_llama_3
LlamaFinetune
2024-09-25T15:44:49Z
73
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T15:39:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hfwfjwjkj/bart_law_trained
hfwfjwjkj
2024-09-25T15:44:17Z
91
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T15:40:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qgallouedec/online-dpo-qwen2-3
qgallouedec
2024-09-25T15:39:27Z
20
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "online-dpo", "generated_from_trainer", "conversational", "dataset:dataset_name", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-25T15:27:22Z
--- base_model: Qwen/Qwen2-0.5B-Instruct datasets: trl-lib/ultrafeedback-prompt library_name: transformers model_name: online-dpo-qwen2-3 tags: - trl - online-dpo - generated_from_trainer licence: license --- # Model Card for online-dpo-qwen2-3 This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/ultrafeedback-prompt](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt) dataset.
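A minimal generation sketch for this checkpoint, assuming the repository holds full model weights and relying on the Qwen2 chat template applied by the pipeline:

```python
# Sketch: generate with the online-DPO fine-tuned Qwen2 checkpoint above.
from transformers import pipeline

generator = pipeline("text-generation", model="qgallouedec/online-dpo-qwen2-3")
messages = [{"role": "user", "content": "Explain online DPO in one sentence."}]
output = generator(messages, max_new_tokens=64)
# With chat-style input the pipeline returns the whole conversation; the last turn is the reply.
print(output[0]["generated_text"][-1]["content"])
```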
dat-lequoc/vLLM-fast-apply-4bit-v0.2
dat-lequoc
2024-09-25T15:22:47Z
61
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/Qwen2.5-Coder-7B-bnb-4bit", "base_model:quantized:unsloth/Qwen2.5-Coder-7B-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-09-25T15:19:47Z
--- base_model: unsloth/Qwen2.5-Coder-7B-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- # Uploaded model - **Developed by:** quocdat25 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-7B-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
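Reloading the 4-bit checkpoint for inference could look like the following sketch, assuming a CUDA GPU and that the `unsloth` and `bitsandbytes` packages are installed; the sequence length is an illustrative choice:

```python
# Sketch: reload the 4-bit fine-tune above with Unsloth for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dat-lequoc/vLLM-fast-apply-4bit-v0.2",
    max_seq_length=4096,  # illustrative; set to whatever your prompts need
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```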
gabri3l/hospital-ai-en-nup-bi
gabri3l
2024-09-25T15:21:24Z
118
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-23T14:39:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]