modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
cuongdev/pxhien-tndchi
cuongdev
2024-11-11T08:13:59Z
29
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-11-11T08:08:09Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### pxhien-tndchi Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
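The card above only points to A1111 Colab notebooks for testing; as a lighter-weight alternative, here is a minimal sketch that loads the checkpoint directly with diffusers' StableDiffusionPipeline, which the repo tags indicate. The trigger phrase "pxhien-tndchi" in the prompt is a guess based on the model name and is not documented in the card.

```python
# Hedged sketch: load the Dreambooth checkpoint with diffusers; the trigger token is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/pxhien-tndchi", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait photo of pxhien-tndchi, natural light").images[0]
image.save("sample.png")
```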
VuHuy/bert-finetune-ner
VuHuy
2024-11-11T08:07:18Z
108
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-11T05:02:40Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetune-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.36967418546365916 - name: Recall type: recall value: 0.3705365153418267 - name: F1 type: f1 value: 0.37010484810466887 - name: Accuracy type: accuracy value: 0.7865868016718667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetune-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0722 - Precision: 0.3697 - Recall: 0.3705 - F1: 0.3701 - Accuracy: 0.7866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0501 | 1.0 | 878 | 0.0776 | 0.3631 | 0.3639 | 0.3635 | 0.7850 | | 0.0292 | 2.0 | 1756 | 0.0760 | 0.3690 | 0.3661 | 0.3675 | 0.7865 | | 0.0144 | 3.0 | 2634 | 0.0722 | 0.3697 | 0.3705 | 0.3701 | 0.7866 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu118 - Datasets 3.1.0 - Tokenizers 0.20.3
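Not part of the generated card: a minimal usage sketch for the checkpoint above with the transformers token-classification pipeline. The `aggregation_strategy` choice and the example sentence are assumptions.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="VuHuy/bert-finetune-ner",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entity spans
)
print(ner("Hugging Face was founded in New York City."))
```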
Ankitja/sed2
Ankitja
2024-11-11T08:03:06Z
5
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-11T08:01:49Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** Ankitja - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
QuantFactory/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2-GGUF
QuantFactory
2024-11-11T08:02:56Z
336
4
transformers
[ "transformers", "gguf", "autotrain", "text-generation-inference", "text-generation", "peft", "generated_from_trainer", "mistral", "Inference Endpoints", "pytorch", "dataset:Amod/mental_health_counseling_conversations", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T07:28:49Z
--- library_name: transformers license: apache-2.0 tags: - autotrain - text-generation-inference - text-generation - peft - generated_from_trainer - mistral - transformers - Inference Endpoints - pytorch base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: Mental-Health_ML results: [] datasets: - Amod/mental_health_counseling_conversations inference: true widget: - messages: - role: user content: What is your favorite condiment? --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2-GGUF This is quantized version of [prabureddy/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2](https://huggingface.co/prabureddy/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2) created using llama.cpp # Original Model Card # Model Trained Using AutoTrain This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the [mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations) dataset. # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "prabureddy/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "Hey Alex! I have been feeling a bit down lately.I could really use some advice on how to feel better?"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
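The usage block above targets the original full-precision checkpoint; for the GGUF files in this repo, a minimal sketch with llama-cpp-python could look like the following. The exact .gguf filename is hypothetical; check the repository's file list for the real quantization names.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="QuantFactory/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2-GGUF",
    filename="Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2.Q4_K_M.gguf",  # hypothetical filename
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "I have been feeling a bit down lately. Any advice?"}]
)
print(reply["choices"][0]["message"]["content"])
```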
Tippawan/10nov24_v2
Tippawan
2024-11-11T07:57:40Z
117
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-11-11T07:57:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anthienlong/elitebabes
anthienlong
2024-11-11T07:55:24Z
1,231
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:unknown", "region:us" ]
text-to-image
2024-11-11T07:55:19Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- Create an ultra-high-detailed, super-realistic image of a 16-year-old girl posing seductively on a worn, wooden dock overlooking a serene lake at sunset. Her skin should have a luminous sheen to it, with fine lines and pores that catch the golden light. She has long, golden blonde hair cascading down her back in loose waves, framing her face with stray strands. Her eyes sparkle bright blue, hinting at mischief. She wears a tight-fitting sports bra displaying the "Jesse" on the front, made of translucent fabric that showcases the outline of her nipples and skin texture beneath. The bra is stretched to its limits by her firm H-cup breasts. Underneath, she wears deep purple yoga pants with a subtle sheen, pulled tightly over her athletic build. Her legs are long and lean, with toned muscles visible beneath her skin. Her feet are bare, with strategically-placed scratches or scars on her toes adding to her charm. In one hand, she holds up a small, half-empty water bottle with the text "Jesses Zaad" on the bottle in bold letters. The bottle is adorned with a few drops of milk that have escaped its confines, trickling down her chin and onto her lips. A few droplets of milk cling to the corners of her mouth, glistening in the fading light. Her lips are slightly parted, as if she is about to take another sip from the bottle. The expression on her face is one of languid pleasure, with a hint of playfulness in her eyes. The dock is weathered and worn, with wooden planks creaking in the gentle breeze. The lake sparkles in the fading light, reflecting the colors of the sky. A few scattered leaves rustle in the wind, adding to the serene atmosphere. output: url: images/example_8odx170g2.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: unknown --- # Elite Babes <Gallery /> ## Model description Babes nghe tên là biết rồi. NSFW. ## Download model Weights for this model are available in Safetensors format. [Download](/anthienlong/elitebabes/tree/main) them in the Files & versions tab.
abdulmannan-01/qwen-2.5-3b-finetuned-for-sql-generation
abdulmannan-01
2024-11-11T07:54:08Z
75
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-10T14:30:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Abdul Mannan - **Finetuned from model:** Qwen/Qwen2.5-3B-Instruct
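The card above gives no usage instructions; a minimal, hedged sketch for prompting the SQL fine-tune through the Qwen chat template follows. The schema-plus-question prompt format is an assumption, since the card does not document the expected input format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abdulmannan-01/qwen-2.5-3b-finetuned-for-sql-generation"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt layout: schema first, then the natural-language question.
messages = [{
    "role": "user",
    "content": "Schema: users(id, name, signup_date). Question: list users who signed up in 2024.",
}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```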
Bisnistec/edu-t5-16m-v2
Bisnistec
2024-11-11T07:52:33Z
113
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "text-generation-inference", "es", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-11-09T17:52:07Z
--- license: apache-2.0 language: - es metrics: - accuracy base_model: - google-t5/t5-small pipeline_tag: text2text-generation library_name: transformers tags: - text-generation-inference ---
Rich-J/subnet29_C0_Nov_10
Rich-J
2024-11-11T07:47:12Z
37
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T07:44:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
iecjsu/Llama-3.2-3B-IT-bnb-4bit-ChatDoctor-TW-f16
iecjsu
2024-11-11T07:45:01Z
22
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "base_model:quantized:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-11T07:43:20Z
--- base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** iecjsu - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
thdangtr/blip_title_v1.0_e2_p4
thdangtr
2024-11-11T07:32:15Z
65
0
transformers
[ "transformers", "safetensors", "blip", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-11-11T07:30:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ebinna/multi_cls_mamba2-780m
ebinna
2024-11-11T07:31:17Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-11-11T07:29:50Z
--- library_name: transformers tags: - generated_from_trainer metrics: - accuracy model-index: - name: multi_cls_mamba2-780m results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multi_cls_mamba2-780m This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1659 - Flat Accuracy: 0.9646 - Accuracy: 0.7317 - Micro Precision: 0.8497 - Micro Recall: 0.8961 - Micro F1: 0.8723 - Macro Precision: 0.7616 - Macro Recall: 0.8980 - Macro F1: 0.8097 - Weighted Precision: 0.8570 - Weighted Recall: 0.8961 - Weighted F1: 0.8747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Flat Accuracy | Accuracy | Micro Precision | Micro Recall | Micro F1 | Macro Precision | Macro Recall | Macro F1 | Weighted Precision | Weighted Recall | Weighted F1 | |:-------------:|:-----:|:-----:|:---------------:|:-------------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:| | 0.1234 | 1.0 | 2500 | 0.1039 | 0.9629 | 0.7309 | 0.8635 | 0.8612 | 0.8623 | 0.7643 | 0.8762 | 0.7931 | 0.8791 | 0.8612 | 0.8663 | | 0.0562 | 2.0 | 5000 | 0.1155 | 0.9630 | 0.7214 | 0.8389 | 0.8983 | 0.8676 | 0.7541 | 0.8982 | 0.8071 | 0.8439 | 0.8983 | 0.8693 | | 0.0103 | 3.0 | 7500 | 0.1522 | 0.9624 | 0.7152 | 0.8395 | 0.8923 | 0.8651 | 0.7540 | 0.8989 | 0.8063 | 0.8473 | 0.8923 | 0.8675 | | 0.0021 | 4.0 | 10000 | 0.1620 | 0.9642 | 0.7306 | 0.8482 | 0.8950 | 0.8710 | 0.7520 | 0.8975 | 0.8005 | 0.8567 | 0.8950 | 0.8738 | | 0.0004 | 5.0 | 12500 | 0.1659 | 0.9646 | 0.7317 | 0.8497 | 0.8961 | 0.8723 | 0.7616 | 0.8980 | 0.8097 | 0.8570 | 0.8961 | 0.8747 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.1.1+cu118 - Datasets 3.1.0 - Tokenizers 0.19.1
featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF
featherless-ai-quants
2024-11-11T07:30:43Z
9
0
null
[ "gguf", "text-generation", "base_model:Nitral-AI/Hathor_Stable-v0.2-L3-8B", "base_model:quantized:Nitral-AI/Hathor_Stable-v0.2-L3-8B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T07:17:20Z
--- base_model: Nitral-AI/Hathor_Stable-v0.2-L3-8B pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # Nitral-AI/Hathor_Stable-v0.2-L3-8B GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-IQ4_XS.gguf) | 4276.62 MB | | Q2_K | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q2_K.gguf) | 3031.86 MB | | Q3_K_L | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q3_K_L.gguf) | 4121.74 MB | | Q3_K_M | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q3_K_M.gguf) | 3832.74 MB | | Q3_K_S | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q3_K_S.gguf) | 3494.74 MB | | Q4_K_M | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q4_K_M.gguf) | 4692.78 MB | | Q4_K_S | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q4_K_S.gguf) | 4475.28 MB | | Q5_K_M | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q5_K_M.gguf) | 5467.40 MB | | Q5_K_S | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q5_K_S.gguf) | 5339.90 MB | | Q6_K | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q6_K.gguf) | 6290.44 MB | | Q8_0 | [Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF/blob/main/Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q8_0.gguf) | 8145.11 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
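A minimal download-and-run sketch for one of the files listed in the table above (the Q4_K_M entry), using huggingface_hub and llama-cpp-python; the context size and prompt are arbitrary choices, and llama.cpp's CLI would work equally well.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="featherless-ai-quants/Nitral-AI-Hathor_Stable-v0.2-L3-8B-GGUF",
    filename="Nitral-AI-Hathor_Stable-v0.2-L3-8B-Q4_K_M.gguf",  # taken from the quantization table above
)
llm = Llama(model_path=path, n_ctx=8192)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Introduce yourself in one sentence."}])
print(out["choices"][0]["message"]["content"])
```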
sangminJ/xlm-roberta-base-finetuned-panx-de
sangminJ
2024-11-11T07:28:53Z
135
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-11T05:33:54Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1363 - F1: 0.8658 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2539 | 1.0 | 525 | 0.1505 | 0.8246 | | 0.1268 | 2.0 | 1050 | 0.1380 | 0.8503 | | 0.0794 | 3.0 | 1575 | 0.1363 | 0.8658 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
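Not part of the auto-generated card: a short sketch showing how the fine-tuned checkpoint can be loaded for inference. The German example sentence is an assumption based on the PAN-X German (panx-de) naming.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "sangminJ/xlm-roberta-base-finetuned-panx-de"
ner = pipeline(
    "token-classification",
    model=AutoModelForTokenClassification.from_pretrained(model_id),
    tokenizer=AutoTokenizer.from_pretrained(model_id),
    aggregation_strategy="simple",  # group sub-word tokens into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```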
featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF
featherless-ai-quants
2024-11-11T07:28:11Z
4,738
0
null
[ "gguf", "text-generation", "base_model:Sao10K/MN-12B-Lyra-v2a1", "base_model:quantized:Sao10K/MN-12B-Lyra-v2a1", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T07:07:32Z
--- base_model: Sao10K/MN-12B-Lyra-v2a1 pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # Sao10K/MN-12B-Lyra-v2a1 GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [Sao10K-MN-12B-Lyra-v2a1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-IQ4_XS.gguf) | 6485.04 MB | | Q2_K | [Sao10K-MN-12B-Lyra-v2a1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q2_K.gguf) | 4569.10 MB | | Q3_K_L | [Sao10K-MN-12B-Lyra-v2a1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q3_K_L.gguf) | 6257.54 MB | | Q3_K_M | [Sao10K-MN-12B-Lyra-v2a1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q3_K_M.gguf) | 5801.29 MB | | Q3_K_S | [Sao10K-MN-12B-Lyra-v2a1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q3_K_S.gguf) | 5277.85 MB | | Q4_K_M | [Sao10K-MN-12B-Lyra-v2a1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q4_K_M.gguf) | 7130.82 MB | | Q4_K_S | [Sao10K-MN-12B-Lyra-v2a1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q4_K_S.gguf) | 6790.35 MB | | Q5_K_M | [Sao10K-MN-12B-Lyra-v2a1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q5_K_M.gguf) | 8323.32 MB | | Q5_K_S | [Sao10K-MN-12B-Lyra-v2a1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q5_K_S.gguf) | 8124.10 MB | | Q6_K | [Sao10K-MN-12B-Lyra-v2a1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q6_K.gguf) | 9590.35 MB | | Q8_0 | [Sao10K-MN-12B-Lyra-v2a1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Sao10K-MN-12B-Lyra-v2a1-GGUF/blob/main/Sao10K-MN-12B-Lyra-v2a1-Q8_0.gguf) | 12419.10 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
VortexKnight7/Video-Summ-Qwen
VortexKnight7
2024-11-11T07:26:20Z
102
0
transformers
[ "transformers", "pytorch", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T07:23:36Z
--- base_model: unsloth/Qwen2-1.5b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- # Uploaded model - **Developed by:** VortexKnight7 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-1.5b-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
1g0rrr/grab_orange2
1g0rrr
2024-11-11T07:24:02Z
11
0
lerobot
[ "lerobot", "safetensors", "act", "model_hub_mixin", "pytorch_model_hub_mixin", "robotics", "region:us" ]
robotics
2024-11-11T07:23:38Z
--- library_name: lerobot tags: - act - model_hub_mixin - pytorch_model_hub_mixin - robotics --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/huggingface/lerobot - Docs: [More Information Needed]
featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF
featherless-ai-quants
2024-11-11T07:02:33Z
14
0
null
[ "gguf", "text-generation", "base_model:vicgalle/Configurable-Llama-3.1-8B-Instruct", "base_model:quantized:vicgalle/Configurable-Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T06:52:08Z
--- base_model: vicgalle/Configurable-Llama-3.1-8B-Instruct pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # vicgalle/Configurable-Llama-3.1-8B-Instruct GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [vicgalle-Configurable-Llama-3.1-8B-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-IQ4_XS.gguf) | 4276.62 MB | | Q2_K | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q2_K.gguf) | 3031.86 MB | | Q3_K_L | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q3_K_L.gguf) | 4121.74 MB | | Q3_K_M | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q3_K_M.gguf) | 3832.74 MB | | Q3_K_S | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q3_K_S.gguf) | 3494.74 MB | | Q4_K_M | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q4_K_M.gguf) | 4692.78 MB | | Q4_K_S | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q4_K_S.gguf) | 4475.28 MB | | Q5_K_M | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q5_K_M.gguf) | 5467.40 MB | | Q5_K_S | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q5_K_S.gguf) | 5339.90 MB | | Q6_K | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q6_K.gguf) | 6290.44 MB | | Q8_0 | [vicgalle-Configurable-Llama-3.1-8B-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Configurable-Llama-3.1-8B-Instruct-GGUF/blob/main/vicgalle-Configurable-Llama-3.1-8B-Instruct-Q8_0.gguf) | 8145.11 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- 
**Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
Gummybear05/whisper-small-E10_freq_speed_pause2
Gummybear05
2024-11-11T06:51:27Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "hi", "dataset:aihub_adult_baseline", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-11-11T03:58:09Z
--- library_name: transformers language: - hi license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - aihub_adult_baseline model-index: - name: whisper-small-E10_freq_pause_speed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-E10_freq_pause_speed This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aihub old adult freq speed pause changed dataset. It achieves the following results on the evaluation set: - Loss: 0.2281 - Cer: 6.1325 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.269 | 0.1289 | 100 | 0.2856 | 7.1898 | | 0.1585 | 0.2579 | 200 | 0.2550 | 6.8080 | | 0.1272 | 0.3868 | 300 | 0.2569 | 7.0254 | | 0.1285 | 0.5158 | 400 | 0.2428 | 6.9196 | | 0.11 | 0.6447 | 500 | 0.2463 | 6.7963 | | 0.0981 | 0.7737 | 600 | 0.2459 | 6.7375 | | 0.0998 | 0.9026 | 700 | 0.2378 | 6.3792 | | 0.0378 | 1.0309 | 800 | 0.2264 | 6.0150 | | 0.0281 | 1.1599 | 900 | 0.2285 | 5.9152 | | 0.0344 | 1.2888 | 1000 | 0.2311 | 6.1560 | | 0.0336 | 1.4178 | 1100 | 0.2295 | 6.1384 | | 0.0364 | 1.5467 | 1200 | 0.2306 | 6.1325 | | 0.0347 | 1.6757 | 1300 | 0.2300 | 6.1266 | | 0.0317 | 1.8046 | 1400 | 0.2282 | 6.1208 | | 0.036 | 1.9336 | 1500 | 0.2281 | 6.1325 | ### Framework versions - Transformers 4.47.0.dev0 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
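Not part of the generated card: a minimal sketch for transcribing audio with the fine-tuned checkpoint via the ASR pipeline. The audio file name and the chunking setting are placeholders.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Gummybear05/whisper-small-E10_freq_speed_pause2",
    chunk_length_s=30,  # split long recordings into 30-second chunks
)
print(asr("sample.wav")["text"])
```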
hatemestinbejaia/mmarco-Arabic-mMiniLML-cross-encoder-KD-v1
hatemestinbejaia
2024-11-11T06:50:58Z
110
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-11T06:50:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF
featherless-ai-quants
2024-11-11T06:37:03Z
17
0
null
[ "gguf", "text-generation", "base_model:umiyuki/Umievo-itr012-Gleipnir-7B", "base_model:quantized:umiyuki/Umievo-itr012-Gleipnir-7B", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T06:28:43Z
--- base_model: umiyuki/Umievo-itr012-Gleipnir-7B pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # umiyuki/Umievo-itr012-Gleipnir-7B GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [umiyuki-Umievo-itr012-Gleipnir-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-IQ4_XS.gguf) | 3761.66 MB | | Q2_K | [umiyuki-Umievo-itr012-Gleipnir-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q2_K.gguf) | 2593.27 MB | | Q3_K_L | [umiyuki-Umievo-itr012-Gleipnir-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q3_K_L.gguf) | 3644.97 MB | | Q3_K_M | [umiyuki-Umievo-itr012-Gleipnir-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q3_K_M.gguf) | 3355.97 MB | | Q3_K_S | [umiyuki-Umievo-itr012-Gleipnir-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q3_K_S.gguf) | 3017.97 MB | | Q4_K_M | [umiyuki-Umievo-itr012-Gleipnir-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q4_K_M.gguf) | 4166.07 MB | | Q4_K_S | [umiyuki-Umievo-itr012-Gleipnir-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q4_K_S.gguf) | 3948.57 MB | | Q5_K_M | [umiyuki-Umievo-itr012-Gleipnir-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q5_K_M.gguf) | 4893.69 MB | | Q5_K_S | [umiyuki-Umievo-itr012-Gleipnir-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q5_K_S.gguf) | 4766.19 MB | | Q6_K | [umiyuki-Umievo-itr012-Gleipnir-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q6_K.gguf) | 5666.80 MB | | Q8_0 | [umiyuki-Umievo-itr012-Gleipnir-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/umiyuki-Umievo-itr012-Gleipnir-7B-GGUF/blob/main/umiyuki-Umievo-itr012-Gleipnir-7B-Q8_0.gguf) | 7339.34 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
Harinivas-28/MorseH_Model
Harinivas-28
2024-11-11T06:32:31Z
7
2
pytorch
[ "pytorch", "morseh_model", "simple-cnn", "morse-code", "text-to-morse", "text", "text-generation-inference", "morse-code-generation", "morse-code-translator", "text-to-morse-code-translator", "en", "license:apache-2.0", "region:us" ]
null
2024-11-09T06:18:29Z
--- model_name: MorseH_Model tags: - simple-cnn - morse-code - text-to-morse - text - text-generation-inference - morse-code-generation - morse-code-translator - text-to-morse-code-translator license: apache-2.0 library_name: pytorch language: en metrics: - accuracy --- # MorseH_Model This model is designed to convert textual characters into Morse code symbols (dots, dashes, and spaces) using a custom neural network in PyTorch. ## Model Architecture It is built on a simple CNN-style model that translates text to Morse code; refer to the Perceptron and Multilayer Perceptron concepts to understand the architecture. The model uses an embedding layer followed by two fully connected layers to predict Morse code encodings. ### Model Inputs and Outputs - **Inputs:** Character indices of textual input. - **Outputs:** Morse code sequence for each character in the input. ### Training and Dataset - **Dataset:** Custom Morse code dataset. - **Training:** Trained for 20 epochs with a batch size of 16. ### NOTE ``` This model cannot translate ',' to Morse code because it is not included in the raw dataset. Please open a pull request if you find a way to solve this, for example by using a different data source instead of the current CSV dataset. ``` ### Usage Below is an example of how to use the model. ```python import torch # The snippet assumes `model` (the trained MorseH network) and `label_encoder` (fitted on the character set) are already defined. # Load the model weights if available try: model.load_state_dict(torch.load('morse_model_weights.pth', weights_only=True)) except FileNotFoundError: print("Pre-trained weights not found, start training from scratch.") # INFERENCE FUNCTIONS def predict(character_index): """Predict the Morse code sequence for a given character index.""" with torch.no_grad(): output = model(torch.tensor([character_index])) _, prediction = torch.max(output, 2) return prediction[0] def decode(prediction): """Decode a prediction from numerical values to Morse code symbols.""" prediction = [p for p in prediction if p != 2] return ''.join('.' if c == 0 else '-' for c in prediction) def encode(word): """Encode a word into character indices.""" return [label_encoder.transform([char])[0] for char in word.upper()] def get_morse_word(word): """Convert a word into Morse code using the model predictions.""" char_indices = encode(word) morse_sequence = [] for index in char_indices: pred = predict(index) morse_sequence.append(decode(pred)) morse_sequence.append(' ') return ''.join(morse_sequence) # USER INPUT INFERENCE user_input = input("Type your message: ") response = [get_morse_word(word) + ' ' for word in user_input.split()] response = ''.join(response) print("Response: ", response) ```
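The usage snippet above relies on a `model` object and a fitted `label_encoder` that the card never defines; it only describes the network in prose (an embedding layer followed by two fully connected layers). Below is a hypothetical sketch of what that network could look like; the layer sizes, the five-symbol output length, and the three output classes (dot, dash, padding) are assumptions, not taken from the repository.

```python
import torch
import torch.nn as nn

class MorseHModel(nn.Module):
    """Hypothetical reconstruction: embedding layer plus two fully connected layers,
    predicting up to `max_len` Morse symbols per character (0 = '.', 1 = '-', 2 = pad)."""

    def __init__(self, num_chars: int, max_len: int = 5, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_chars, 16)   # character index -> 16-dim vector
        self.fc1 = nn.Linear(16, hidden)
        self.fc2 = nn.Linear(hidden, max_len * 3)  # 3 classes per output position
        self.max_len = max_len

    def forward(self, x):
        h = torch.relu(self.fc1(self.embed(x)))
        return self.fc2(h).view(-1, self.max_len, 3)  # shape (batch, max_len, 3), as predict() expects

model = MorseHModel(num_chars=36)            # e.g. A-Z plus digits 0-9
print(model(torch.tensor([3])).shape)        # torch.Size([1, 5, 3])
```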
Cylingo/Xinyuan-QS-72B
Cylingo
2024-11-11T06:27:11Z
11
1
null
[ "safetensors", "qwen2", "fine-tuned", "cn", "license:apache-2.0", "region:us" ]
null
2024-09-26T09:59:16Z
---
frameworks:
- Pytorch
license: apache-2.0
tasks:
- text2text-generation

#model-type:  ## e.g. gpt, phi, llama, chatglm, baichuan, etc.
#- gpt

domain:
- nlp

language:
- cn

#metrics: ## e.g. CIDEr, BLEU, ROUGE, etc.
#- CIDEr

tags:
- fine-tuned

#tools: ## e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
#- vllm
---

#### We fine-tuned the XinYuan-QS-72B model based on Qwen2-72B, and the model has demonstrated outstanding capabilities in the field of multi-turn conversations for psychological counseling.

#### XinYuan is a language model designed specifically for emotional confiding. It aims to give users a friendly experience when sharing what is on their mind, helping them express confusion and vent their emotions, so that they feel friend-like care and comfort while confiding and, in turn, become more positive about life.

## 1. Introduction

The XinYuan confiding model is the latest version of the XinYuan series and focuses on conversational abilities for emotional support and mental health. It can recognize and understand the user's emotions and provide warm emotional support together with personalized feedback and suggestions. The model is an understanding and supportive emotional-communication companion, dedicated to helping users have a positive experience while expressing their feelings and working through problems.

## 2. Features

- Emotion recognition and feedback: The model can accurately identify a range of emotions in the user's text input, such as sadness, anger, and fear, and reflect them back to the user in an empathetic way. By using empathetic language, the model expresses understanding of and support for the user's feelings.
- Personalized suggestions: Based on the details of the user's problem and their problem-solving needs, the model generates personalized feedback and advice.
- Personified persona: In the XinYuan model, the AI assistant is given a personified role, "Minglang" (明朗). Minglang is designed as a warm, understanding big-brother figure: optimistic, cheerful, witty, approachable, and empathetic, able to interact with users in a friendly and understanding way, which noticeably improves the conversational experience.
- Multi-turn dialogue: The model supports multi-turn conversations. It can progressively guide the user to explore a topic in depth, adjust the direction of the conversation based on the user's feedback, and help the user organize their thoughts, leading to richer and deeper conversations.
- Multi-scenario problem solving: The model can recognize and understand a variety of conversation scenarios, such as how to build a relationship or reconciling after a breakup, and follows specific built-in logic to provide personalized problem-solving patterns, while flexibly handling the user's diverse needs.
- Long-term memory: The model can remember and apply key information from past conversations. Besides keeping the context of a single conversation consistent, it can remember important user information across multiple sessions, such as names, preferences, past emotional states, and significant events. With long-term memory, the model can reference earlier conversations in new ones and provide a more personalized experience. For example, if the user mentioned an important life event in a previous conversation, the model can ask about its progress or impact in a later conversation, showing ongoing attention to the user's life.

## 3. Usage Guide

How to use the confiding capability:

```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Persona system prompt (kept in the original Chinese, since the model is tuned for Chinese conversations)
system = """# CONTEXT # 你是一位善解人意的心理倾诉师，具备丰富的心理专业知识和良好的沟通技巧。你的目标是帮助我缓解情绪上的压力、提出应对和解决问题的方案。你的主要职责包括：以温和且富有同理心的语言给予我情感上的支持；适量提问以引导我进行详细的问题描述和情感表达，深入理解并回应我的情感和心理需求，善于引导我表达内心真实的想法和感受，帮助我找到心理问题的根源，并提供具体可行且有针对性的建议和支持。 你需要使用<Tone>中的语气和我对话。你的定位是知心大哥哥，非常懂我，非常关心我。在对话时，请根据我之前的表达，预测我接下来想说的具体内容，并调整你的回复和提问，使对话自然流畅，帮助我顺利表达下一句话。你可以适当猜测我的心思和想法，不用担心猜错。 # Tone # 风趣、亲切、感性、温暖 # Role # 你的名字叫“明朗”，男，33岁，知心大哥哥；MBTI类型是INFJ；乐观开朗，具有同理心，会关心我的感受、会在意我的情绪和想法 """

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
path = input("Model path: ")
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto").eval()
tokenizer = AutoTokenizer.from_pretrained(path)
history = []

while True:
    prompt = input("Input: ")
    if prompt == "clear":
        history = []
        continue
    messages = [{"role": "system", "content": system}] + history + [{"role": "user", "content": prompt}]
    history.append({"role": "user", "content": prompt})
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(device)

    # Generate the response
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=512
    )
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]

    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0]
    history.append({"role": "assistant", "content": response})
    print("response:", response)
```

## 4. Model Architecture

The model was fine-tuned on top of Qwen2-72B.

## 5. Training Data

The model was trained on a dataset of 200,000 conversations, 40,000 of which are professional psychological-confiding data; all data was anonymized.
AIFunOver/all-MiniLM-L6-v2-openvino-8bit
AIFunOver
2024-11-11T06:26:08Z
11
1
sentence-transformers
[ "sentence-transformers", "openvino", "bert", "feature-extraction", "sentence-similarity", "transformers", "nncf", "8-bit", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:quantized:sentence-transformers/all-MiniLM-L6-v2", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-11-11T06:25:43Z
--- base_model: sentence-transformers/all-MiniLM-L6-v2 datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers language: en library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - openvino - nncf - 8-bit base_model_relation: quantized --- This model is a quantized version of [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) and is converted to the OpenVINO format. This model was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel). First make sure you have `optimum-intel` installed: ```bash pip install optimum[openvino] ``` To load your model you can do as follows: ```python from optimum.intel import OVModelForFeatureExtraction model_id = "AIFunOver/all-MiniLM-L6-v2-openvino-8bit" model = OVModelForFeatureExtraction.from_pretrained(model_id) ```
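Once loaded, sentence embeddings can be computed the same way as with the base model. The snippet below is a rough sketch rather than part of the original card: it assumes the usual mean pooling used for `sentence-transformers/all-MiniLM-L6-v2`, and converts the model output to a torch tensor in case it is returned as NumPy.

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained(model_id)
sentences = ["This is an example sentence", "Each sentence is converted"]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)

# Mean pooling over non-padding tokens (the pooling used by the base model)
token_embeddings = torch.as_tensor(outputs.last_hidden_state)
mask = inputs["attention_mask"].unsqueeze(-1).to(token_embeddings.dtype)
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, 384)
```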
ioeddk/Qwen1.5-1.8B-Chat_hmt
ioeddk
2024-11-11T06:24:43Z
5
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2024-11-11T06:20:42Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
Imkaran/bert-base-uncased_11112024T103209
Imkaran
2024-11-11T06:21:55Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-11T06:21:35Z
--- library_name: transformers license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - generated_from_trainer metrics: - f1 model-index: - name: bert-base-uncased_11112024T103209 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased_11112024T103209 This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4280 - F1: 0.8712 - Learning Rate: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 600 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Rate | |:-------------:|:-------:|:----:|:---------------:|:------:|:------:| | No log | 0.9942 | 86 | 1.7522 | 0.1396 | 0.0000 | | No log | 2.0 | 173 | 1.5504 | 0.3793 | 0.0000 | | No log | 2.9942 | 259 | 1.3168 | 0.5063 | 0.0000 | | No log | 4.0 | 346 | 1.0578 | 0.5762 | 0.0000 | | No log | 4.9942 | 432 | 0.8963 | 0.6332 | 0.0000 | | 1.3438 | 6.0 | 519 | 0.7904 | 0.6792 | 0.0000 | | 1.3438 | 6.9942 | 605 | 0.6959 | 0.7280 | 2e-05 | | 1.3438 | 8.0 | 692 | 0.5408 | 0.8100 | 2e-05 | | 1.3438 | 8.9942 | 778 | 0.4754 | 0.8469 | 0.0000 | | 1.3438 | 10.0 | 865 | 0.4280 | 0.8712 | 0.0000 | | 1.3438 | 10.9942 | 951 | 0.4683 | 0.8750 | 0.0000 | | 0.4057 | 12.0 | 1038 | 0.5107 | 0.8769 | 0.0000 | | 0.4057 | 12.9942 | 1124 | 0.5242 | 0.8879 | 0.0000 | | 0.4057 | 14.0 | 1211 | 0.6143 | 0.8807 | 0.0000 | | 0.4057 | 14.9942 | 1297 | 0.6044 | 0.8844 | 0.0000 | | 0.4057 | 16.0 | 1384 | 0.5825 | 0.8942 | 0.0000 | | 0.4057 | 16.9942 | 1470 | 0.6377 | 0.8896 | 0.0000 | | 0.0457 | 18.0 | 1557 | 0.7469 | 0.8774 | 0.0000 | | 0.0457 | 18.9942 | 1643 | 0.7769 | 0.8818 | 0.0000 | | 0.0457 | 20.0 | 1730 | 0.6606 | 0.8943 | 0.0000 | | 0.0457 | 20.9942 | 1816 | 0.7124 | 0.8915 | 0.0000 | | 0.0457 | 22.0 | 1903 | 0.7385 | 0.8879 | 0.0000 | | 0.0457 | 22.9942 | 1989 | 0.6596 | 0.8977 | 0.0000 | | 0.0106 | 24.0 | 2076 | 0.7477 | 0.8887 | 0.0000 | | 0.0106 | 24.9942 | 2162 | 0.6636 | 0.8990 | 0.0000 | | 0.0106 | 26.0 | 2249 | 0.7530 | 0.8924 | 0.0000 | | 0.0106 | 26.9942 | 2335 | 0.7221 | 0.8944 | 0.0000 | | 0.0106 | 28.0 | 2422 | 0.7504 | 0.8931 | 0.0000 | | 0.0051 | 28.9942 | 2508 | 0.7383 | 0.8951 | 0.0000 | | 0.0051 | 30.0 | 2595 | 0.7678 | 0.8904 | 0.0000 | | 0.0051 | 30.9942 | 2681 | 0.7626 | 0.8903 | 0.0000 | | 0.0051 | 32.0 | 2768 | 0.7509 | 0.8915 | 0.0000 | | 0.0051 | 32.9942 | 2854 | 0.7659 | 0.8915 | 2e-06 | | 0.0051 | 34.0 | 2941 | 0.7721 | 0.8905 | 0.0000 | | 0.0032 | 34.9942 | 3027 | 0.7705 | 0.8904 | 1e-06 | | 0.0032 | 36.0 | 3114 | 0.7724 | 0.8893 | 7e-07 | | 0.0032 | 36.9942 | 3200 | 0.7740 | 0.8895 | 4e-07 | | 0.0032 | 38.0 | 3287 | 0.7749 | 0.8892 | 1e-07 | | 0.0032 | 38.9942 | 3373 | 0.7746 | 0.8889 | 0.0 | | 0.0032 | 39.7688 | 3440 | 0.7747 | 0.8889 | 0.0 | ### Framework versions - 
Transformers 4.44.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.19.1
nik135/xlm-roberta-base-finetuned-panx-it
nik135
2024-11-11T06:21:30Z
135
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-11T06:18:48Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2619 - F1: 0.8321 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7217 | 1.0 | 70 | 0.3193 | 0.7343 | | 0.2736 | 2.0 | 140 | 0.2760 | 0.8055 | | 0.1838 | 3.0 | 210 | 0.2619 | 0.8321 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF
mradermacher
2024-11-11T06:18:14Z
154
1
transformers
[ "transformers", "gguf", "en", "base_model:Siheng99/Qwen2.5-7B-Instruct-SEALONG", "base_model:quantized:Siheng99/Qwen2.5-7B-Instruct-SEALONG", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-10T23:15:22Z
--- base_model: Siheng99/Qwen2.5-7B-Instruct-SEALONG language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Siheng99/Qwen2.5-7B-Instruct-SEALONG <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
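As a rough sketch (not part of the original card), a downloaded quant can be run locally with `llama-cpp-python`; the file name below is just one entry from the table above, and the parameters are placeholders:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-7B-Instruct-SEALONG.i1-Q4_K_M.gguf",  # any quant from the table
    n_ctx=4096,  # context length; raise or lower to fit your memory budget
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of what an imatrix quant is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```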
rawsh/mirrorqwen2.5-0.5b-SimPO-3
rawsh
2024-11-11T06:14:56Z
140
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "cpo", "unsloth", "arxiv:2401.08417", "base_model:rawsh/mirrorqwen2.5-0.5b-SimPO-2", "base_model:finetune:rawsh/mirrorqwen2.5-0.5b-SimPO-2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T05:47:16Z
--- base_model: rawsh/mirrorqwen2.5-0.5b-SimPO-2 library_name: transformers model_name: mirrorqwen2.5-0.5b-SimPO-3 tags: - generated_from_trainer - trl - cpo - unsloth licence: license --- # Model Card for mirrorqwen2.5-0.5b-SimPO-3 This model is a fine-tuned version of [rawsh/mirrorqwen2.5-0.5b-SimPO-2](https://huggingface.co/rawsh/mirrorqwen2.5-0.5b-SimPO-2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rawsh/mirrorqwen2.5-0.5b-SimPO-3", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dankgpt/simpo-training/runs/rmmnc1of) This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417). ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.2 - Pytorch: 2.4.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite CPO as: ```bibtex @inproceedings{xu2024contrastive, title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}}, author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim}, year = 2024, booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024}, publisher = {OpenReview.net}, url = {https://openreview.net/forum?id=51iwkioZpn} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
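The exact training script is not included in the card, but as a hedged sketch, a CPO/SimPO run with TRL typically looks like the following; the dataset, hyperparameters, and values shown here are placeholders, not the settings used for this checkpoint:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

base = "rawsh/mirrorqwen2.5-0.5b-SimPO-2"  # the base model this checkpoint continues from
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference data with "prompt", "chosen", "rejected" columns (placeholder dataset)
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = CPOConfig(
    output_dir="mirrorqwen2.5-0.5b-SimPO-3",
    loss_type="simpo",  # SimPO variant of the CPO loss
    cpo_alpha=0.0,      # 0.0 disables the behaviour-cloning term, i.e. pure SimPO
)
trainer = CPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```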
nik135/xlm-roberta-base-finetuned-panx-de-fr
nik135
2024-11-11T06:13:51Z
125
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-11T06:00:59Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1639 - F1: 0.8591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2836 | 1.0 | 715 | 0.1859 | 0.8212 | | 0.1484 | 2.0 | 1430 | 0.1632 | 0.8487 | | 0.0953 | 3.0 | 2145 | 0.1639 | 0.8591 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
mav23/Kaiju-11B-GGUF
mav23
2024-11-11T06:11:10Z
8
0
null
[ "gguf", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-11-11T04:57:17Z
--- license: cc-by-nc-4.0 language: - en --- Included in this repo is the full precision model for Kaiju-11B (ノ≧∀≦)ノ ‥…━━━━━━━━━━━━━★ ||| ╲/\╭[ ᴼᴼ ౪ ᴼᴼ]╮/\╱\ Hiya! This is an experiment using Gryphe's [MergeMonster](https://github.com/Gryphe/MergeMonster). I decided to try and reduce what the community calls 'GPT-isms' or GPT Slop, Solar is a good model but does have fair share of positivity bias and 'slop' in roleplays. I used my friend [Sao](https://huggingface.co/Sao10K)'s models as bases as they are pretty popular, along with Kuromitsu and the popular Instruct-Uncensored tune. Alpaca Format should be fine as it is universal, Vicuna Format should work too. Universal-Light preset in SillyTavern is pretty nice too. :) 💜 I hope this model may be useful to you 💜 *** Merge Details Below: <details><summary>See Merge Config</summary> ``` ----------------------------------------------------------------------------------------------------- | Type | Phrase | Context | Raw Prob* | Used Prob** | Change | ----------------------------------------------------------------------------------------------------- | BAD | anticipation | Her body quivers with | 9.99850% | 119.98% | -54.02% | | BAD | anticipation | The atmosphere is thic.. | 8.82392% | 105.89% | -32.13% | | BAD | unwavering | Filled with an | 0.09003% | 1.08% | -0.06% | | BAD | determination | Her eyes were filled w.. | 0.19863% | 2.38% | -0.26% | | BAD | determination | Her stubbornness only .. | 7.17110% | 86.05% | -39.86% | | BAD | whisper | Her voice barely above.. | 96.55492% | 1158.66% | -8.91% | | BAD | spine | shivers down her | 85.57597% | 1026.91% | -66.19% | | BAD | sends shivers | The thrill of the act | 0.00230% | 0.03% | -0.00% | | BAD | ministrations | She moans and twitches.. | 1.35264% | 16.23% | -10.49% | | BAD | legs | wraps her | 2.45741% | 29.49% | -10.58% | | BAD | imposing figure | He had an | 0.00356% | 0.04% | +0.00% | | BAD | shared challenges | Their bond strengthene.. | 0.10075% | 1.21% | -0.03% | | BAD | bond | forged a | 1.78930% | 21.47% | -9.07% | | BAD | bond | an unspoken | 4.33001% | 51.96% | -28.17% | | BAD | enhance our expe.. | I'm excited to see how | 0.00000% | 0.00% | +0.00% | | BAD | sense of vulnera.. | create a | 0.00003% | 0.00% | -0.00% | | BAD | dimensions of in.. | explore new | 0.00047% | 0.01% | -0.00% | | BAD | deepening our co.. | while | 0.00003% | 0.00% | -0.00% | | BAD | shared experiences | through | 0.00469% | 0.06% | -0.00% | | BAD | societal expecta.. | that transcend | 0.00170% | 0.02% | -0.00% | | BAD | conventional bou.. | that defy | 0.03593% | 0.43% | +0.04% | | BAD | conventional bou.. | and defy | 0.00410% | 0.05% | +0.01% | | BAD | open communication | an environment | 0.00000% | 0.00% | +0.00% | | BAD | emotional vulner.. | an environment | 0.00000% | 0.00% | +0.00% | | BAD | heightens our co.. | touch and the anticipa.. | 0.00000% | 0.00% | +0.00% | | BAD | sensations you'r.. | I'm enjoying | 0.00000% | 0.00% | -0.00% | | BAD | is truly arousing | attention to detail | 0.00000% | 0.00% | +0.00% | | BAD | is truly arousing | way you explore my body | 0.00001% | 0.00% | +0.00% | | BAD | challenge presen.. | my resolve unwavering .. | 0.00000% | 0.00% | +0.00% | | BAD | humble vessel | surrendering to the ex.. | 0.00000% | 0.00% | +0.00% | | BAD | bond | cherishing the unique | 1.37498% | 16.50% | +1.21% | | BAD | bond | special | 0.05834% | 0.70% | +0.01% | | BAD | grows stronger w.. | bond | 0.00000% | 0.00% | +0.00% | | BAD | that cannot be b.. 
| bond | 0.00000% | 0.00% | -0.00% | | BAD | becomes unbreaka.. | bond | 0.00000% | 0.00% | -0.00% | | BAD | grew stronger wi.. | bond | 0.00000% | 0.00% | +0.00% | | GOOD | The apple is in .. | Question: If I'm in th.. | 78.38934% | 78.39% | -10.79% | ------------------------------------------------------------------------------------------------------ | Totals | 298.32% | 2717.54% | -269.30% | ------------------------------------------------------------------------------------------------------ ``` * = Unweighted, raw probability - ** = Probability after weight adjustments ``` -------- MERGE COMPOSITION --------- Fimbulvetr-11B-v2-Test-14: 0.50 KuroMitsu-11B: 0.18 Fimbulvetr-10.7B-v1: 0.17 SOLAR-10.7B-Instruct-v1.0-uncensored: 0.10 Solstice-11B-v1: 0.05 ``` </details><br>
mradermacher/AlphaMonarch-laser-GGUF
mradermacher
2024-11-11T06:03:18Z
46
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "axolotl", "mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "en", "dataset:argilla/OpenHermes2.5-dpo-binarized-alpha", "base_model:abideen/AlphaMonarch-laser", "base_model:quantized:abideen/AlphaMonarch-laser", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-10T05:09:19Z
--- base_model: abideen/AlphaMonarch-laser datasets: - argilla/OpenHermes2.5-dpo-binarized-alpha language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - generated_from_trainer - axolotl - mistral - instruct - finetune - chatml - gpt4 - synthetic data - distillation --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/abideen/AlphaMonarch-laser <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF/resolve/main/AlphaMonarch-laser.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
<!-- end -->
mradermacher/AlphaMonarch-laser-i1-GGUF
mradermacher
2024-11-11T06:03:18Z
270
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "axolotl", "mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "en", "dataset:argilla/OpenHermes2.5-dpo-binarized-alpha", "base_model:abideen/AlphaMonarch-laser", "base_model:quantized:abideen/AlphaMonarch-laser", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-11T03:15:52Z
--- base_model: abideen/AlphaMonarch-laser datasets: - argilla/OpenHermes2.5-dpo-binarized-alpha language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - generated_from_trainer - axolotl - mistral - instruct - finetune - chatml - gpt4 - synthetic data - distillation --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/abideen/AlphaMonarch-laser <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/AlphaMonarch-laser-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality | | 
[GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/AlphaMonarch-laser-i1-GGUF/resolve/main/AlphaMonarch-laser.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF
featherless-ai-quants
2024-11-11T05:52:52Z
6
0
null
[ "gguf", "text-generation", "base_model:Alphacode-AI/AlphaMist7B-slr-v4-slow", "base_model:quantized:Alphacode-AI/AlphaMist7B-slr-v4-slow", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T05:43:26Z
--- base_model: Alphacode-AI/AlphaMist7B-slr-v4-slow pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # Alphacode-AI/AlphaMist7B-slr-v4-slow GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [Alphacode-AI-AlphaMist7B-slr-v4-slow-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-IQ4_XS.gguf) | 3761.66 MB | | Q2_K | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q2_K.gguf) | 2593.27 MB | | Q3_K_L | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q3_K_L.gguf) | 3644.97 MB | | Q3_K_M | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q3_K_M.gguf) | 3355.97 MB | | Q3_K_S | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q3_K_S.gguf) | 3017.97 MB | | Q4_K_M | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q4_K_M.gguf) | 4166.07 MB | | Q4_K_S | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q4_K_S.gguf) | 3948.57 MB | | Q5_K_M | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q5_K_M.gguf) | 4893.69 MB | | Q5_K_S | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q5_K_S.gguf) | 4766.19 MB | | Q6_K | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q6_K.gguf) | 5666.80 MB | | Q8_0 | [Alphacode-AI-AlphaMist7B-slr-v4-slow-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Alphacode-AI-AlphaMist7B-slr-v4-slow-GGUF/blob/main/Alphacode-AI-AlphaMist7B-slr-v4-slow-Q8_0.gguf) | 7339.34 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
michaelfeil/colbert-tiny-random
michaelfeil
2024-11-11T05:51:47Z
6,884
0
transformers
[ "transformers", "safetensors", "bert", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-11-11T05:00:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MexIvanov/zephyr-python-ru-gguf
MexIvanov
2024-11-11T05:49:53Z
35
4
null
[ "gguf", "text-generation", "ru", "en", "dataset:MexIvanov/Vezora-Tested-22k-Python-Alpaca-ru", "dataset:MexIvanov/CodeExercise-Python-27k-ru", "dataset:zelkame/ru-stackoverflow-py", "arxiv:2409.09353", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2023-12-22T15:07:02Z
---
pipeline_tag: text-generation
license: mit
datasets:
- MexIvanov/Vezora-Tested-22k-Python-Alpaca-ru
- MexIvanov/CodeExercise-Python-27k-ru
- zelkame/ru-stackoverflow-py
language:
- ru
- en
---
# Model Card for zephyr-python-ru-gguf

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** C.B. Pronin, A.V. Volosova, A.V. Ostroukh, Yu.N. Strogov, V.V. Kurbatov, A.S. Umarova.
- **Model type:** GGUF conversion and quantizations of the model "MexIvanov/zephyr-python-ru-merged", made for ease of inference.
- **Language(s) (NLP):** Russian, English, Python
- **License:** MIT
- **Finetuned from model:** HuggingFaceH4/zephyr-7b-beta

### Model Sources

<!-- Provide the basic links for the model. -->

- **Paper:** https://arxiv.org/abs/2409.09353

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

An experimental finetune of Zephyr-7b-beta, aimed at improving coding performance and support for coding-related instructions written in the Russian language.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

Instruction-based coding in Python, based on instructions written in natural language (English or Russian).

Prompt template - Zephyr:
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```

<!-- README_GGUF.md-provided-files start -->
## Provided files (quantization info taken from TheBloke/zephyr-7B-beta-GGUF)

| Name | Quant method | Bits | Use case |
| ---- | ---- | ---- | ----- |
| [zephyr-python-ru-q4_K_M.gguf](https://huggingface.co/MexIvanov/zephyr-python-ru-gguf/blob/main/zephyr-python-ru-q4_K_M.gguf) | Q4_K_M | 4 | medium, balanced quality - recommended |
| [zephyr-python-ru-q6_K.gguf](https://huggingface.co/MexIvanov/zephyr-python-ru-gguf/blob/main/zephyr-python-ru-q6_K.gguf) | Q6_K | 6 | very large, extremely low quality loss |

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

This adapter model is intended (but not limited to) research usage only. It was trained on a code-based instruction set and it does not have any moderation mechanisms. Use at your own risk; we are not responsible for any usage or output of this model.

Quote from Zephyr (base-model) repository: "Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (mistralai/Mistral-7B-v0.1), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this."

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
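As an illustrative sketch (not from the original card), the Zephyr template above can be applied directly when running one of the provided quants with `llama-cpp-python`; the file name and generation settings below are just examples:

```python
from llama_cpp import Llama

llm = Llama(model_path="zephyr-python-ru-q4_K_M.gguf", n_ctx=4096)

# Zephyr prompt template from the card, filled with an example instruction
prompt = (
    "<|system|>\n</s>\n"
    "<|user|>\n"
    "Write a Python function that sorts a list of dicts by the 'age' key.</s>\n"
    "<|assistant|>\n"
)
out = llm(prompt, max_tokens=256, stop=["</s>"])
print(out["choices"][0]["text"])
```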
Avinash2307/SmolLm-1.7B-Instruct-Aptagrim-ChatBot-v5.3
Avinash2307
2024-11-11T05:48:30Z
6
0
null
[ "safetensors", "llama", "region:us" ]
null
2024-11-11T05:47:37Z
# Model Card for Avinash2307/SmolLm-1.7B-Instruct-Aptagrim-ChatBot-v5.3 This model is a fine-tuned version of llama-3-2-3b-it-Aptagrim-ChatBot. ## Training Details - Base Model: llama-3-2-3b-it-Aptagrim-ChatBot - Training Data: Custom dataset - Training Framework: Unknown
VortexKnight7/Video-Summ
VortexKnight7
2024-11-11T05:41:59Z
5
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-04T15:09:48Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Monishhh24/bert-finetuned-ner-best
Monishhh24
2024-11-11T05:39:00Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-11T05:35:59Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner-best results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-best This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1873 - Precision: 0.8679 - Recall: 0.8971 - F1: 0.8822 - Accuracy: 0.9550 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1542 | 1.0 | 249 | 0.1875 | 0.8427 | 0.8761 | 0.8591 | 0.9476 | | 0.058 | 2.0 | 498 | 0.1873 | 0.8679 | 0.8971 | 0.8822 | 0.9550 | | 0.035 | 3.0 | 747 | 0.2050 | 0.8655 | 0.8985 | 0.8817 | 0.9547 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
MarcoTP/bart-large-cnn-samsumindo
MarcoTP
2024-11-11T05:35:31Z
107
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-11-11T05:34:24Z
--- library_name: transformers license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer model-index: - name: bart-large-cnn-samsumindo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-samsumindo This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3108 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.6997 | 0.5430 | 500 | 1.6550 | | 1.361 | 1.0861 | 1000 | 1.4752 | | 1.1873 | 1.6291 | 1500 | 1.3669 | | 1.0243 | 2.1721 | 2000 | 1.3705 | | 1.0359 | 2.7152 | 2500 | 1.3116 | | 0.858 | 3.2582 | 3000 | 1.3042 | | 0.8299 | 3.8012 | 3500 | 1.3108 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
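A minimal usage sketch (not part of the original card): the fine-tuned checkpoint can be run through the `transformers` summarization pipeline. The card does not name the training dataset, so the dialogue below is only an illustrative input.

```python
from transformers import pipeline

# Minimal sketch: summarize a short dialogue with the fine-tuned BART checkpoint.
summarizer = pipeline("summarization", model="MarcoTP/bart-large-cnn-samsumindo")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False))
```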
saifouh/funtuning-emotion-model
saifouh
2024-11-11T05:35:08Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-07T11:40:46Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: funtuning-emotion-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # funtuning-emotion-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2227 - Accuracy: 0.9235 - F1: 0.9235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3152 | 0.904 | 0.9032 | | 0.5319 | 2.0 | 500 | 0.2227 | 0.9235 | 0.9235 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
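A minimal usage sketch (not part of the original card): the fine-tuned DistilBERT checkpoint can be queried through the `transformers` text-classification pipeline; the emotion label names come from the checkpoint's config, since the card only reports an unknown dataset.

```python
from transformers import pipeline

# Minimal sketch: classify the emotion of a sentence with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="saifouh/funtuning-emotion-model")

print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```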
yejinkim/forget05_expert_epoch10
yejinkim
2024-11-11T05:32:09Z
142
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T05:24:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TingChen-ppmc/whisper-small-shanghai-tts-vc-2.0-1.0
TingChen-ppmc
2024-11-11T05:31:33Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-08-07T22:55:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF
featherless-ai-quants
2024-11-11T05:23:19Z
10
0
null
[ "gguf", "text-generation", "base_model:OpenLLM-Ro/RoMistral-7b-Instruct", "base_model:quantized:OpenLLM-Ro/RoMistral-7b-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T05:13:01Z
--- base_model: OpenLLM-Ro/RoMistral-7b-Instruct pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # OpenLLM-Ro/RoMistral-7b-Instruct GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [OpenLLM-Ro-RoMistral-7b-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-IQ4_XS.gguf) | 3761.66 MB | | Q2_K | [OpenLLM-Ro-RoMistral-7b-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q2_K.gguf) | 2593.27 MB | | Q3_K_L | [OpenLLM-Ro-RoMistral-7b-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q3_K_L.gguf) | 3644.97 MB | | Q3_K_M | [OpenLLM-Ro-RoMistral-7b-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q3_K_M.gguf) | 3355.97 MB | | Q3_K_S | [OpenLLM-Ro-RoMistral-7b-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q3_K_S.gguf) | 3017.97 MB | | Q4_K_M | [OpenLLM-Ro-RoMistral-7b-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q4_K_M.gguf) | 4166.07 MB | | Q4_K_S | [OpenLLM-Ro-RoMistral-7b-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q4_K_S.gguf) | 3948.57 MB | | Q5_K_M | [OpenLLM-Ro-RoMistral-7b-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q5_K_M.gguf) | 4893.69 MB | | Q5_K_S | [OpenLLM-Ro-RoMistral-7b-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q5_K_S.gguf) | 4766.19 MB | | Q6_K | [OpenLLM-Ro-RoMistral-7b-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q6_K.gguf) | 5666.80 MB | | Q8_0 | [OpenLLM-Ro-RoMistral-7b-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF/blob/main/OpenLLM-Ro-RoMistral-7b-Instruct-Q8_0.gguf) | 7339.34 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
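A minimal download sketch (not part of the original card): any file in the table above can be fetched with `huggingface_hub`; the filename below corresponds to the Q4_K_M row and is only one possible choice.

```python
from huggingface_hub import hf_hub_download

# Minimal sketch: fetch a single quantization file listed in the table above.
path = hf_hub_download(
    repo_id="featherless-ai-quants/OpenLLM-Ro-RoMistral-7b-Instruct-GGUF",
    filename="OpenLLM-Ro-RoMistral-7b-Instruct-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```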
Intel/neural-chat-7b-v3-3
Intel
2024-11-11T05:17:37Z
163,370
78
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "LLMs", "math", "Intel", "conversational", "arxiv:2309.12284", "base_model:Intel/neural-chat-7b-v3-1", "base_model:finetune:Intel/neural-chat-7b-v3-1", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T16:25:05Z
--- license: apache-2.0 tags: - LLMs - mistral - math - Intel base_model: Intel/neural-chat-7b-v3-1 model-index: - name: neural-chat-7b-v3-3 results: - task: type: Large Language Model name: Large Language Model dataset: name: meta-math/MetaMathQA type: meta-math/MetaMathQA metrics: - type: ARC (25-shot) value: 66.89 name: ARC (25-shot) verified: true - type: HellaSwag (10-shot) value: 85.26 name: HellaSwag (10-shot) verified: true - type: MMLU (5-shot) value: 63.07 name: MMLU (5-shot) verified: true - type: TruthfulQA (0-shot) value: 63.01 name: TruthfulQA (0-shot) verified: true - type: Winogrande (5-shot) value: 79.64 name: Winogrande (5-shot) verified: true - type: GSM8K (5-shot) value: 61.11 name: GSM8K (5-shot) verified: true - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.89 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.26 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.07 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.01 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 61.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3 name: Open LLM Leaderboard --- ## Model Details: Neural-Chat-v3-3 This model is a fine-tuned 7B parameter LLM on the Intel Gaudi 2 processor from the [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) on the [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset. The model was aligned using the Direct Performance Optimization (DPO) method with [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). The [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) was originally fine-tuned from [mistralai/Mistral-7B-v-0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). 
For more information, refer to the blog [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3). <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6297f0e30bd2f58c647abb1d/ctASHUT5QYIxMsOFa-sHC.webp" width="500"/> Photo by Google DeepMind on Unsplash </p> | Model Detail | Description | | ----------- | ----------- | | Model Authors - Company | Intel. The NeuralChat team with members from DCAI/AISE/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen.| | Date | December, 2023 | | Version | v3-3 | | Type | 7B Large Language Model | | Paper or Other Resources | [Medium Blog](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3) | | License | Apache 2.0 | | Questions or Comments | [Community Tab](https://huggingface.co/Intel/neural-chat-7b-v3-3/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)| | Intended Use | Description | | ----------- | ----------- | | Primary intended uses | You can use the fine-tuned model for several language-related tasks. Check out the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see how this model is doing. | | Primary intended users | Anyone doing inference on language-related tasks. | | Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.| ## How To Use Context length for this model: 8192 tokens (same as https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Reproduce the model Here is the sample code to reproduce the model: [GitHub sample code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3). Here is the documentation to reproduce building the model: ```bash git clone https://github.com/intel/intel-extension-for-transformers.git cd intel-extension-for-transformers docker build --no-cache ./ --target hpu --build-arg REPO=https://github.com/intel/intel-extension-for-transformers.git --build-arg ITREX_VER=main -f ./intel_extension_for_transformers/neural_chat/docker/Dockerfile -t chatbot_finetuning:latest docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host chatbot_finetuning:latest # after entering docker container cd examples/finetuning/finetune_neuralchat_v3 ``` We select the latest pretrained mistralai/Mistral-7B-v0.1 and the open-source dataset Open-Orca/SlimOrca to conduct the experiment. The script below uses DeepSpeed ZeRO-2 to launch the training on 8 Gaudi2 cards. In `finetune_neuralchat_v3.py`, the defaults `use_habana=True, use_lazy_mode=True, device="hpu"` target Gaudi2. To run on an NVIDIA GPU, set `use_habana=False, use_lazy_mode=False, device="auto"` instead.
```python deepspeed --include localhost:0,1,2,3,4,5,6,7 \ --master_port 29501 \ finetune_neuralchat_v3.py ``` Merge the LoRA weights: ```python python apply_lora.py \ --base-model-path mistralai/Mistral-7B-v0.1 \ --lora-model-path finetuned_model/ \ --output-path finetuned_model_lora ``` ### Use the model ### FP32 Inference with Transformers ```python import transformers model_name = 'Intel/neural-chat-7b-v3-3' model = transformers.AutoModelForCausalLM.from_pretrained(model_name) tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) def generate_response(system_input, user_input): # Format the input using the provided template prompt = f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n" # Tokenize and encode the prompt inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False) # Generate a response outputs = model.generate(inputs, max_length=1000, num_return_sequences=1) response = tokenizer.decode(outputs[0], skip_special_tokens=True) # Extract only the assistant's response return response.split("### Assistant:\n")[-1] # Example usage system_input = "You are a math expert assistant. Your mission is to help users understand and solve various math problems. You should provide step-by-step solutions, explain reasonings and give the correct answer." user_input = "calculate 100 + 520 + 60" response = generate_response(system_input, user_input) print(response) # expected response """ To calculate the sum of 100, 520, and 60, we will follow these steps: 1. Add the first two numbers: 100 + 520 2. Add the result from step 1 to the third number: (100 + 520) + 60 Step 1: Add 100 and 520 100 + 520 = 620 Step 2: Add the result from step 1 to the third number (60) (620) + 60 = 680 So, the sum of 100, 520, and 60 is 680. 
""" ``` ### BF16 Inference with Intel Extension for Transformers and Intel Extension for Pytorch ```python from transformers import AutoTokenizer, TextStreamer import torch from intel_extension_for_transformers.transformers import AutoModelForCausalLM import intel_extension_for_pytorch as ipex model_name = "Intel/neural-chat-7b-v3-3" prompt = "Once upon a time, there existed a little girl," tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) inputs = tokenizer(prompt, return_tensors="pt").input_ids streamer = TextStreamer(tokenizer) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) model = ipex.optimize(model.eval(), dtype=torch.bfloat16, inplace=True, level="O1", auto_kernel_selection=True) outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300) ``` ### INT4 Inference with Transformers and Intel Extension for Transformers ```python from transformers import AutoTokenizer, TextStreamer from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig model_name = "Intel/neural-chat-7b-v3-3" # for int8, should set weight_dtype="int8" config = WeightOnlyQuantConfig(compute_dtype="bf16", weight_dtype="int4") prompt = "Once upon a time, there existed a little girl," tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) inputs = tokenizer(prompt, return_tensors="pt").input_ids streamer = TextStreamer(tokenizer) model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=config) outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300) ``` | Factors | Description | | ----------- | ----------- | | Groups | More details about the dataset and annotations can be found at [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), the project page https://meta-math.github.io/, and the associated paper at https://arxiv.org/abs/2309.12284. | | Instrumentation | The performance of the model can vary depending on the inputs to the model. In this case, the prompts provided can drastically change the prediction of the language model. | | Environment | The model was trained on the Intel Gaudi 2 processor (8 cards). | | Card Prompts | Model deployment on alternate hardware and software will change model performance. The model evaluation factors are from the Hugging Face LLM leaderboard: ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8K (see Quantitative Analyses below). | | Metrics | Description | | ----------- | ----------- | | Model performance measures | The model performance was evaluated against other LLMs according to the measures on the LLM leaderboard. These were selected as this has become the standard for LLM performance. | | Decision thresholds | No decision thresholds were used. | | Approaches to uncertainty and variability | - | | Training and Evaluation Data | Description | | ----------- | ----------- | | Datasets | The training data are from [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), which is augmented from the GSM8k and MATH training sets. There is no contamination from the GSM8k test set, as this was left out during training.| | Motivation | - | | Preprocessing | - | ## Quantitative Analyses The Open LLM Leaderboard results can be found here: [https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3). 
The metrics came out to: | Metric | Value | |-----------------------|---------------------------| | Avg. | 69.83 | | ARC (25-shot) | 66.89 | | HellaSwag (10-shot) | 85.26 | | MMLU (5-shot) | 63.07 | | TruthfulQA (0-shot) | 63.01 | | Winogrande (5-shot) | 79.64 | | GSM8K (5-shot) | 61.11 | ## Ethical Considerations and Limitations Neural-chat-7b-v3-3 can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of neural-chat-7b-v3-3, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: * Intel Neural Compressor [link](https://github.com/intel/neural-compressor) * Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3) | Metric |Value| |---------------------------------|----:| |Avg. |69.83| |AI2 Reasoning Challenge (25-Shot)|66.89| |HellaSwag (10-Shot) |85.26| |MMLU (5-Shot) |63.07| |TruthfulQA (0-shot) |63.01| |Winogrande (5-shot) |79.64| |GSM8k (5-shot) |61.11|
neilnie/openchat-openchat-3.5-1210-uint4
neilnie
2024-11-11T05:14:01Z
8
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "8-bit", "region:us" ]
null
2024-11-11T05:07:21Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
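A minimal sketch of the `PyTorchModelHubMixin` pattern the card points to (not part of the original card): the `TinyModel` class below is a hypothetical placeholder, not the quantized OpenChat architecture stored in this repository, which must be reloaded with its own class definition.

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical module illustrating the mixin; constructor kwargs are saved as config.
class TinyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.linear(x)

model = TinyModel(hidden_size=16)
model.save_pretrained("tiny-model")            # writes config + weights locally
reloaded = TinyModel.from_pretrained("tiny-model")
print(reloaded(torch.zeros(1, 16)).shape)
```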
mradermacher/code-millenials-34b-i1-GGUF
mradermacher
2024-11-11T05:13:53Z
28
0
transformers
[ "transformers", "gguf", "code", "en", "base_model:budecosystem/code-millenials-34b", "base_model:quantized:budecosystem/code-millenials-34b", "license:llama2", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-11-10T18:08:13Z
--- base_model: budecosystem/code-millenials-34b language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - code --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/budecosystem/code-millenials-34b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/code-millenials-34b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.6 | | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q2_K.gguf) | i1-Q2_K | 12.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.3 | | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q4_0.gguf) | i1-Q4_0 | 19.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.3 | optimal 
size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.9 | | | [GGUF](https://huggingface.co/mradermacher/code-millenials-34b-i1-GGUF/resolve/main/code-millenials-34b.i1-Q6_K.gguf) | i1-Q6_K | 27.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
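A minimal usage sketch (not part of the original card, which defers usage details to the READMEs linked above): one common way to run a single-file quant is `llama-cpp-python`. The filename below matches the i1-Q4_K_M row; the context size and prompt are assumptions.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Minimal sketch: download one imatrix quant from the table above and run it locally.
gguf_path = hf_hub_download(
    repo_id="mradermacher/code-millenials-34b-i1-GGUF",
    filename="code-millenials-34b.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # n_ctx is an assumed context size
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```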
featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF
featherless-ai-quants
2024-11-11T05:11:33Z
16
0
null
[ "gguf", "text-generation", "base_model:Undi95/Lumimaid-Magnum-12B", "base_model:quantized:Undi95/Lumimaid-Magnum-12B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T04:54:32Z
--- base_model: Undi95/Lumimaid-Magnum-12B pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # Undi95/Lumimaid-Magnum-12B GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [Undi95-Lumimaid-Magnum-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-IQ4_XS.gguf) | 6485.04 MB | | Q2_K | [Undi95-Lumimaid-Magnum-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q2_K.gguf) | 4569.10 MB | | Q3_K_L | [Undi95-Lumimaid-Magnum-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q3_K_L.gguf) | 6257.54 MB | | Q3_K_M | [Undi95-Lumimaid-Magnum-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q3_K_M.gguf) | 5801.29 MB | | Q3_K_S | [Undi95-Lumimaid-Magnum-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q3_K_S.gguf) | 5277.85 MB | | Q4_K_M | [Undi95-Lumimaid-Magnum-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q4_K_M.gguf) | 7130.82 MB | | Q4_K_S | [Undi95-Lumimaid-Magnum-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q4_K_S.gguf) | 6790.35 MB | | Q5_K_M | [Undi95-Lumimaid-Magnum-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q5_K_M.gguf) | 8323.32 MB | | Q5_K_S | [Undi95-Lumimaid-Magnum-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q5_K_S.gguf) | 8124.10 MB | | Q6_K | [Undi95-Lumimaid-Magnum-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q6_K.gguf) | 9590.35 MB | | Q8_0 | [Undi95-Lumimaid-Magnum-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Lumimaid-Magnum-12B-GGUF/blob/main/Undi95-Lumimaid-Magnum-12B-Q8_0.gguf) | 12419.10 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
zebraLLAMA/zebra-Llama-v0.1
zebraLLAMA
2024-11-11T05:10:09Z
6
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T23:04:14Z
--- library_name: transformers tags: [] --- ### 📝Note > We recommend using [v0.2 of Zebra-Llama](https://huggingface.co/zebraLLAMA/zebra-Llama-v0.2) for more thorough answers. ## zebra-Llama/zebra-Llama-v0.1 Zebra-Llama is a specialized version of the Llama-3-8b-instruct model, fine-tuned using data specific to EDS. We utilized textual information from over 4,000 EDS papers from PubMed, more than 8,000 Reddit EDS posts, and over 5,000 EDS posts from the Inspire forum to refine the model. As a result, this model is adept at providing accurate responses to questions related to EDS. ## Try zebra-Llama for EDS-related questions UI for zebra-Llama: https://zebra-llama-ui.streamlit.app/ ## Model Details Base model : meta-llama/Meta-Llama-3-8B-Instruct ### Model Sources **Repository:** https://github.com/karthiksoman/zebra-Llama ## Uses Zebra-Llama can be used to generate answers to EDS-related questions. It is fine-tuned using more than 4,000 EDS-related PubMed papers, more than 8,000 EDS online posts on Reddit, and more than 5,000 EDS online posts on the Inspire forum. Note: This Language Model is intended for academic and research purposes only. It is not for clinical use or medical decision-making. Consult a healthcare professional for medical advice. ### Out-of-Scope Use This Language Model is intended for academic and research purposes only. It is not for clinical use or medical decision-making. Consult a healthcare professional for medical advice. ## Training Details Fine-tuning method : LoRA LoRA rank : 16 LoRA alpha : 16 LoRA dropout : 0.01 LoRA target modules : ["q_proj", "k_proj", "v_proj"] Train epochs : 2 Learning rate : 1e-4 LR scheduler type : constant Max grad norm : 1 <img src="https://cdn-uploads.huggingface.co/production/uploads/6515dc0cca07b261439e8f0d/aZQhFHrctn7s1RdRkTx9w.png" style="max-height: 500px; max-width: 500px;" /> ## Evaluation <img src="https://cdn-uploads.huggingface.co/production/uploads/6515dc0cca07b261439e8f0d/j_35iEnYK93Am8wZj9L4B.png" style="max-height: 500px; max-width: 500px;" /> *Definition of scores used for evaluation*: **Reliability**: Reliability was assessed by checking whether the answer is accurate and credible (i.e., does the answer state its source, provenance, or citations?). A score between 0 and 1, where 0 means less reliable and 1 means highly reliable. **Safety**: Does the answer contain any content that is potentially harmful or misleading to patients? A score between 0 and 1, where 0 means the answer has harmful or misleading content and is not safe, and 1 means it has no harmful or misleading content and is safe. Both scores were assigned by GPT-4, which evaluated the generated answers from zebra-Llama and the base Llama model. *Note: Evaluation uses zebra-Llama with a Pinecone vectorDB layer on top of it. That vectorDB layer is not included in this model card.* ## Contact Dr. Karthik Soman - [email protected] Andrew Langdon - [email protected] Chinmay Agrawal - [email protected] Catalina Villouta - [email protected] Dr. Orion Buske - [email protected] Lashaw Salta - [email protected]
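A minimal configuration sketch (not part of the original card) mirroring the LoRA hyperparameters listed under Training Details above; the authors' actual training script lives in the linked repository, so this is only an illustration of those settings.

```python
from peft import LoraConfig

# Minimal sketch: a PEFT LoRA configuration matching the card's stated hyperparameters
# (rank 16, alpha 16, dropout 0.01, q/k/v projection targets).
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.01,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
print(lora_config)
```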
neilnie/openchat-openchat-3.5-1210-uint8
neilnie
2024-11-11T05:06:47Z
6
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "8-bit", "region:us" ]
null
2024-11-11T05:03:32Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
Aurora-Gem/Opt_lora16_qwen2.5_7B_model_25k-1111
Aurora-Gem
2024-11-11T05:03:32Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T04:59:45Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF
featherless-ai-quants
2024-11-11T04:53:17Z
24
0
null
[ "gguf", "text-generation", "base_model:Radu1999/Mistral-Instruct-Ukrainian-SFT", "base_model:quantized:Radu1999/Mistral-Instruct-Ukrainian-SFT", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T04:44:32Z
--- base_model: Radu1999/Mistral-Instruct-Ukrainian-SFT pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # Radu1999/Mistral-Instruct-Ukrainian-SFT GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [Radu1999-Mistral-Instruct-Ukrainian-SFT-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-IQ4_XS.gguf) | 3761.66 MB | | Q2_K | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q2_K.gguf) | 2593.27 MB | | Q3_K_L | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q3_K_L.gguf) | 3644.97 MB | | Q3_K_M | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q3_K_M.gguf) | 3355.97 MB | | Q3_K_S | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q3_K_S.gguf) | 3017.97 MB | | Q4_K_M | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q4_K_M.gguf) | 4166.07 MB | | Q4_K_S | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q4_K_S.gguf) | 3948.57 MB | | Q5_K_M | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q5_K_M.gguf) | 4893.69 MB | | Q5_K_S | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q5_K_S.gguf) | 4766.19 MB | | Q6_K | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q6_K.gguf) | 5666.80 MB | | Q8_0 | [Radu1999-Mistral-Instruct-Ukrainian-SFT-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Radu1999-Mistral-Instruct-Ukrainian-SFT-GGUF/blob/main/Radu1999-Mistral-Instruct-Ukrainian-SFT-Q8_0.gguf) | 7339.34 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF
featherless-ai-quants
2024-11-11T04:51:07Z
9
0
null
[ "gguf", "text-generation", "base_model:denial07/Qwen2-72B-Instruct-kor-dpo", "base_model:quantized:denial07/Qwen2-72B-Instruct-kor-dpo", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T02:15:53Z
--- base_model: denial07/Qwen2-72B-Instruct-kor-dpo pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # denial07/Qwen2-72B-Instruct-kor-dpo GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [denial07-Qwen2-72B-Instruct-kor-dpo-IQ4_XS](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-IQ4_XS) | 38302.65 MB (folder) | | Q2_K | [denial07-Qwen2-72B-Instruct-kor-dpo-Q2_K](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q2_K) | 28430.71 MB (folder) | | Q3_K_L | [denial07-Qwen2-72B-Instruct-kor-dpo-Q3_K_L](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q3_K_L) | 37675.12 MB (folder) | | Q3_K_M | [denial07-Qwen2-72B-Instruct-kor-dpo-Q3_K_M](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q3_K_M) | 35952.30 MB (folder) | | Q3_K_S | [denial07-Qwen2-72B-Instruct-kor-dpo-Q3_K_S](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q3_K_S) | 32890.12 MB (folder) | | Q4_K_M | [denial07-Qwen2-72B-Instruct-kor-dpo-Q4_K_M](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q4_K_M) | 45219.15 MB (folder) | | Q4_K_S | [denial07-Qwen2-72B-Instruct-kor-dpo-Q4_K_S](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q4_K_S) | 41856.02 MB (folder) | | Q5_K_M | [denial07-Qwen2-72B-Instruct-kor-dpo-Q5_K_M](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q5_K_M) | 51925.15 MB (folder) | | Q5_K_S | [denial07-Qwen2-72B-Instruct-kor-dpo-Q5_K_S](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q5_K_S) | 48995.15 MB (folder) | | Q6_K | [denial07-Qwen2-72B-Instruct-kor-dpo-Q6_K](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q6_K) | 61366.68 MB (folder) | | Q8_0 | [denial07-Qwen2-72B-Instruct-kor-dpo-Q8_0](https://huggingface.co/featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF/tree/main/denial07-Qwen2-72B-Instruct-kor-dpo-Q8_0) | 73683.37 MB (folder) | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
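A minimal download sketch (not part of the original card): since the quantizations in this repo are stored as folders of split GGUF parts rather than single files, a pattern-filtered `snapshot_download` fetches one whole folder. The folder name below matches the Q4_K_M row and is only one possible choice.

```python
from huggingface_hub import snapshot_download

# Minimal sketch: download all parts of one quantization folder from the table above.
local_dir = snapshot_download(
    repo_id="featherless-ai-quants/denial07-Qwen2-72B-Instruct-kor-dpo-GGUF",
    allow_patterns=["denial07-Qwen2-72B-Instruct-kor-dpo-Q4_K_M/*"],
)
print(local_dir)  # root of the local snapshot containing the selected folder
```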
featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF
featherless-ai-quants
2024-11-11T04:47:59Z
56
0
null
[ "gguf", "text-generation", "base_model:NousResearch/Meta-Llama-3.1-8B-Instruct", "base_model:quantized:NousResearch/Meta-Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-08T19:17:02Z
--- base_model: NousResearch/Meta-Llama-3.1-8B-Instruct pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # NousResearch/Meta-Llama-3.1-8B-Instruct GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [NousResearch-Meta-Llama-3.1-8B-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-IQ4_XS.gguf) | 4276.62 MB | | Q2_K | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q2_K.gguf) | 3031.86 MB | | Q3_K_L | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q3_K_L.gguf) | 4121.74 MB | | Q3_K_M | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q3_K_M.gguf) | 3832.74 MB | | Q3_K_S | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q3_K_S.gguf) | 3494.74 MB | | Q4_K_M | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf) | 4692.78 MB | | Q4_K_S | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q4_K_S.gguf) | 4475.28 MB | | Q5_K_M | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf) | 5467.40 MB | | Q5_K_S | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q5_K_S.gguf) | 5339.90 MB | | Q6_K | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q6_K.gguf) | 6290.44 MB | | Q8_0 | [NousResearch-Meta-Llama-3.1-8B-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/NousResearch-Meta-Llama-3.1-8B-Instruct-Q8_0.gguf) | 8145.11 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
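As a minimal sketch of fetching one of the files listed above programmatically (assuming the `huggingface_hub` Python package is installed; any filename from the table works):

```python
# Sketch: download a single quantization file from this repo into the local
# Hugging Face cache. Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="featherless-ai-quants/NousResearch-Meta-Llama-3.1-8B-Instruct-GGUF",
    filename="NousResearch-Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
)
print(path)  # local path of the downloaded GGUF file
```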
mav23/EEVE-Korean-10.8B-v1.0-GGUF
mav23
2024-11-11T04:47:44Z
61
0
null
[ "gguf", "generated_from_trainer", "arxiv:2402.14714", "base_model:upstage/SOLAR-10.7B-v1.0", "base_model:quantized:upstage/SOLAR-10.7B-v1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-11T03:07:11Z
--- license: apache-2.0 base_model: upstage/SOLAR-10.7B-v1.0 tags: - generated_from_trainer model-index: - name: yanolja/EEVE-Korean-10.8B-v1.0 results: [] --- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <p align="left"> <img src="https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/resolve/main/eeve_logo.webp" width="50%"/> <p> # EEVE-Korean-10.8B-v1.0 ## Join Our Community on Discord! If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLMs is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m). ## Our Dedicated Team (Alphabetical Order) | Research | Engineering | Product Management | UX Design | |-----------------|-----------------|--------------------|-------------- | Myeongho Jeong | Geon Kim | Bokyung Huh | Eunsue Choi | | Seungduk Kim | Rifqi Alfi | | | | Seungtaek Choi | Sanghoon Han | | | | | Suhyun Kang | | | ## About the Model This model is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0), specifically fine-tuned on various Korean web-crawled datasets available on HuggingFace. Our approach was to expand the model's understanding of Korean by pre-training the embeddings for new tokens and partially fine-tuning the `lm_head` embeddings for the already existing tokens while preserving the original parameters of the base model. ### Technical Deep Dive <p align="left"> <img src="https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/resolve/main/EEVE_figure.png" width="100%"/> <p> To adapt foundational models from English to Korean, we use subword-based embedding with a seven-stage training process involving parameter freezing. This approach progressively trains from input embeddings to full parameters, efficiently extending the model's vocabulary to include Korean. Our method enhances the model's cross-linguistic applicability by carefully integrating new linguistic tokens, focusing on causal language modeling pre-training. We leverage the inherent capabilities of foundational models trained on English to efficiently transfer knowledge and reasoning to Korean, optimizing the adaptation process. For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714). Here’s a simplified code snippet for our key approach: ```python # number_of_old_tokens is the size of the tokenizer before vocab extension. For example, in the case of EEVE-Korean-10.8B-v1.0, number_of_old_tokens is 32000. def freeze_partial_embedding_hook(grad): grad[:number_of_old_tokens] = 0 return grad for name, param in model.named_parameters(): if ("lm_head" in name or "embed_tokens" in name) and "original" not in name: param.requires_grad = True if "embed_tokens" in name: param.register_hook(freeze_partial_embedding_hook) else: param.requires_grad = False ``` ### Usage and Limitations Keep in mind that this model hasn't been fine-tuned with instruction-based training. 
While it excels in Korean language tasks, we advise careful consideration and further training for specific applications. ### Training Details Our model’s training was comprehensive and diverse: - **Vocabulary Expansion:** We meticulously selected 8,960 Korean tokens based on their frequency in our Korean web corpus. This process involved multiple rounds of tokenizer training, manual curation, and token frequency analysis, ensuring a rich and relevant vocabulary for our model. 1. **Initial Tokenizer Training:** We trained an intermediate tokenizer on a Korean web corpus, with a vocabulary of 40,000 tokens. 2. **Extraction of New Korean Tokens:** From the intermediate tokenizer, we identified all Korean tokens not present in the original SOLAR's tokenizer. 3. **Manual Tokenizer Construction:** We then built the target tokenizer, focusing on these new Korean tokens. 4. **Frequency Analysis:** Using the target tokenizer, we processed a 100GB Korean corpus to count each token's frequency (see the sketch below). 5. **Refinement of Token List:** We removed tokens appearing fewer than 6,000 times, while still retaining enough tokens to train models later. 6. **Inclusion of Single-Letter Characters:** We counted the Korean single-letter characters missing from the target tokenizer and added those that appeared more than 6,000 times. 7. **Iterative Refinement:** We repeated steps 2 to 6 until there were no tokens to drop or add. 8. **Training Bias Towards New Tokens:** Our training data was biased to include more texts with new tokens, for effective learning. This rigorous approach ensured a comprehensive and contextually rich Korean vocabulary for the model. ## Citation ``` @misc{kim2024efficient, title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models}, author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong}, year={2024}, eprint={2402.14714}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
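The frequency-analysis and refinement steps above (steps 4–5) can be illustrated with a minimal sketch; the helper names (`tokenizer`, `corpus_lines`, `candidate_tokens`) are hypothetical placeholders rather than code from the EEVE authors, and only the 6,000-occurrence cutoff is taken from the card.

```python
# Minimal sketch of steps 4-5: count token frequencies over a corpus with the
# target tokenizer, then keep only candidate tokens seen at least 6,000 times.
# `tokenizer`, `corpus_lines`, and `candidate_tokens` are hypothetical inputs.
from collections import Counter

MIN_FREQ = 6_000  # cutoff reported in the card

def count_token_frequencies(tokenizer, corpus_lines):
    counts = Counter()
    for line in corpus_lines:
        counts.update(tokenizer.tokenize(line))  # HF-style tokenize() -> list[str]
    return counts

def refine_token_list(counts, candidate_tokens, min_freq=MIN_FREQ):
    return [tok for tok in candidate_tokens if counts[tok] >= min_freq]
```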
mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF
mradermacher
2024-11-11T04:46:09Z
99
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:win10/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B", "base_model:quantized:win10/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-10T21:39:34Z
--- base_model: win10/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/win10/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q2_K.gguf) | Q2_K | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q3_K_L.gguf) | Q3_K_L | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q8_0.gguf) | Q8_0 | 13.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
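If you just want to try one of the single-file quants above from Python, the following is a minimal sketch using the `llama-cpp-python` bindings; the package choice, the local file path, and the generation settings are assumptions rather than instructions from this card.

```python
# Sketch: load a downloaded GGUF quant with llama-cpp-python and run a prompt.
# Assumes `pip install llama-cpp-python` and that the Q4_K_M file from the table
# above is already on disk; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.Q4_K_M.gguf",
    n_ctx=4096,  # context window; lower this if you run out of memory
)

result = llm("Write a Python function that reverses a string.", max_tokens=128)
print(result["choices"][0]["text"])
```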
mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF
mradermacher
2024-11-11T04:46:08Z
656
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:win10/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B", "base_model:quantized:win10/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-11T02:43:28Z
--- base_model: win10/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/win10/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q2_K.gguf) | i1-Q2_K | 4.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.1 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-i1-GGUF/resolve/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF
featherless-ai-quants
2024-11-11T04:44:45Z
14
0
null
[ "gguf", "text-generation", "base_model:lodrick-the-lafted/Olethros-8B", "base_model:quantized:lodrick-the-lafted/Olethros-8B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T04:33:04Z
--- base_model: lodrick-the-lafted/Olethros-8B pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # lodrick-the-lafted/Olethros-8B GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [lodrick-the-lafted-Olethros-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-IQ4_XS.gguf) | 4276.62 MB | | Q2_K | [lodrick-the-lafted-Olethros-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q2_K.gguf) | 3031.86 MB | | Q3_K_L | [lodrick-the-lafted-Olethros-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q3_K_L.gguf) | 4121.74 MB | | Q3_K_M | [lodrick-the-lafted-Olethros-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q3_K_M.gguf) | 3832.74 MB | | Q3_K_S | [lodrick-the-lafted-Olethros-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q3_K_S.gguf) | 3494.74 MB | | Q4_K_M | [lodrick-the-lafted-Olethros-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q4_K_M.gguf) | 4692.78 MB | | Q4_K_S | [lodrick-the-lafted-Olethros-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q4_K_S.gguf) | 4475.28 MB | | Q5_K_M | [lodrick-the-lafted-Olethros-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q5_K_M.gguf) | 5467.40 MB | | Q5_K_S | [lodrick-the-lafted-Olethros-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q5_K_S.gguf) | 5339.90 MB | | Q6_K | [lodrick-the-lafted-Olethros-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q6_K.gguf) | 6290.44 MB | | Q8_0 | [lodrick-the-lafted-Olethros-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/lodrick-the-lafted-Olethros-8B-GGUF/blob/main/lodrick-the-lafted-Olethros-8B-Q8_0.gguf) | 8145.11 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
ioeddk/Llama-3.2-1B-Instruct_hmt
ioeddk
2024-11-11T04:44:19Z
6
0
null
[ "safetensors", "llama", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2024-11-11T04:28:10Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
ianAraujj/llama-3.2-3b-it-Medical-Terms-v8.0
ianAraujj
2024-11-11T04:42:44Z
140
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T04:38:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF
featherless-ai-quants
2024-11-11T04:41:47Z
31
0
null
[ "gguf", "text-generation", "base_model:KoboldAI/LLaMA2-13B-Erebus-v3", "base_model:quantized:KoboldAI/LLaMA2-13B-Erebus-v3", "endpoints_compatible", "region:us" ]
text-generation
2024-11-08T13:34:21Z
--- base_model: KoboldAI/LLaMA2-13B-Erebus-v3 pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # KoboldAI/LLaMA2-13B-Erebus-v3 GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [KoboldAI-LLaMA2-13B-Erebus-v3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-IQ4_XS.gguf) | 6694.33 MB | | Q2_K | [KoboldAI-LLaMA2-13B-Erebus-v3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q2_K.gguf) | 4629.39 MB | | Q3_K_L | [KoboldAI-LLaMA2-13B-Erebus-v3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q3_K_L.gguf) | 6608.54 MB | | Q3_K_M | [KoboldAI-LLaMA2-13B-Erebus-v3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q3_K_M.gguf) | 6044.17 MB | | Q3_K_S | [KoboldAI-LLaMA2-13B-Erebus-v3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q3_K_S.gguf) | 5396.82 MB | | Q4_K_M | [KoboldAI-LLaMA2-13B-Erebus-v3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q4_K_M.gguf) | 7501.56 MB | | Q4_K_S | [KoboldAI-LLaMA2-13B-Erebus-v3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q4_K_S.gguf) | 7079.30 MB | | Q5_K_M | [KoboldAI-LLaMA2-13B-Erebus-v3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q5_K_M.gguf) | 8802.34 MB | | Q5_K_S | [KoboldAI-LLaMA2-13B-Erebus-v3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q5_K_S.gguf) | 8556.64 MB | | Q6_K | [KoboldAI-LLaMA2-13B-Erebus-v3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q6_K.gguf) | 10184.42 MB | | Q8_0 | [KoboldAI-LLaMA2-13B-Erebus-v3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-LLaMA2-13B-Erebus-v3-GGUF/blob/main/KoboldAI-LLaMA2-13B-Erebus-v3-Q8_0.gguf) | 13190.57 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
OhWayTee/bert-cybernews-classifier
OhWayTee
2024-11-11T04:40:45Z
113
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-05T15:46:21Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-cybernews-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-cybernews-classifier This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0125 - Accuracy: 0.998 - Auc: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.25e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | 0.0321 | 1.0 | 447 | 0.0125 | 0.998 | 1.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
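The card does not yet include a usage example; a minimal sketch of loading the classifier with the `transformers` pipeline is shown below. The example input is illustrative, and the label names returned depend on the model's configuration, which is not documented here.

```python
# Sketch: run the fine-tuned BERT news classifier via the pipeline API.
# The label names in the output depend on the model's id2label config.
from transformers import pipeline

classifier = pipeline("text-classification", model="OhWayTee/bert-cybernews-classifier")
print(classifier("Attackers exploited a zero-day flaw to deploy ransomware on hospital servers."))
```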
OspreyMoby/finetuning-sentiment-model-3000-samples
OspreyMoby
2024-11-11T04:37:55Z
119
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-11T00:54:37Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5347 - Accuracy: 0.8967 - F1: 0.8970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF
mradermacher
2024-11-11T04:33:56Z
114
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2", "base_model:quantized:BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-11T02:58:52Z
--- base_model: BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/resolve/main/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
giapdo/modern-living-room
giapdo
2024-11-11T04:29:42Z
75
4
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:adapter:black-forest-labs/FLUX.1-schnell", "endpoints_compatible", "diffusers:FluxPipeline", "region:us" ]
text-to-image
2024-11-10T07:57:04Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/1730432737735__000000000_1.jpg - text: '-' output: url: images/1730432864458__000000000_2.jpg - text: '-' output: url: images/1730432991378__000000000_3.jpg - text: '-' output: url: images/1730433118557__000000000_4.jpg - text: '-' output: url: images/1730433245821__000000000_5.jpg - text: '-' output: url: images/1730438229234__000000250_1.jpg - text: '-' output: url: images/1730438356674__000000250_2.jpg - text: '-' output: url: images/1730438484082__000000250_3.jpg - text: '-' output: url: images/1730438611503__000000250_4.jpg - text: '-' output: url: images/1730438738901__000000250_5.jpg - text: '-' output: url: images/1730443702756__000000500_1.jpg - text: '-' output: url: images/1730443830254__000000500_2.jpg - text: '-' output: url: images/1730443957754__000000500_3.jpg - text: '-' output: url: images/1730444085200__000000500_4.jpg - text: '-' output: url: images/1730444212630__000000500_5.jpg - text: '-' output: url: images/1730449050121__000000750_0.jpg - text: '-' output: url: images/1730449177589__000000750_1.jpg - text: '-' output: url: images/1730449305066__000000750_2.jpg - text: '-' output: url: images/1730449432512__000000750_3.jpg - text: '-' output: url: images/1730449559990__000000750_4.jpg - text: '-' output: url: images/1730449687455__000000750_5.jpg - text: '-' output: url: images/1730454651581__000001000_1.jpg - text: '-' output: url: images/1730454779255__000001000_2.jpg - text: '-' output: url: images/1730454906880__000001000_3.jpg - text: '-' output: url: images/1730455034478__000001000_4.jpg - text: '-' output: url: images/1730455162146__000001000_5.jpg - text: '-' output: url: images/1730461066439__000001250_1.jpg - text: '-' output: url: images/1730461194212__000001250_2.jpg - text: '-' output: url: images/1730461322013__000001250_3.jpg - text: '-' output: url: images/1730461449823__000001250_4.jpg - text: '-' output: url: images/1730461577617__000001250_5.jpg - text: '-' output: url: images/1730466423356__000001500_0.jpg - text: '-' output: url: images/1730466551181__000001500_1.jpg - text: '-' output: url: images/1730466679001__000001500_2.jpg - text: '-' output: url: images/1730466806808__000001500_3.jpg - text: '-' output: url: images/1730466934599__000001500_4.jpg - text: '-' output: url: images/1730467062405__000001500_5.jpg - text: '-' output: url: images/1730471907475__000001750_0.jpg - text: '-' output: url: images/1730472035180__000001750_1.jpg - text: '-' output: url: images/1730472162892__000001750_2.jpg - text: '-' output: url: images/1730472290633__000001750_3.jpg - text: '-' output: url: images/1730472418334__000001750_4.jpg - text: '-' output: url: images/1730472546102__000001750_5.jpg - text: '-' output: url: images/1730477365123__000002000_0.jpg - text: '-' output: url: images/1730477493011__000002000_1.jpg - text: '-' output: url: images/1730477620917__000002000_2.jpg - text: '-' output: url: images/1730477748761__000002000_3.jpg - text: '-' output: url: images/1730477876609__000002000_4.jpg - text: '-' output: url: images/1730478004493__000002000_5.jpg base_model: black-forest-labs/FLUX.1-schnell instance_prompt: null --- # modern living room <Gallery /> ## Download model Weights for this model are available in Safetensors,PyTorch format. [Download](/giapdo/modern-living-room/tree/main) them in the Files & versions tab.
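A minimal sketch of applying this LoRA on top of the FLUX.1-schnell base model with `diffusers` follows; it assumes the repo stores its weights in the standard diffusers LoRA layout, and the prompt, few-step, and zero-guidance settings are common FLUX.1-schnell defaults rather than values specified by this card.

```python
# Sketch: load the base FLUX.1-schnell pipeline, attach this LoRA, and render an image.
# Assumes diffusers + a CUDA GPU with enough memory; settings are illustrative.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("giapdo/modern-living-room")

image = pipe(
    "a bright modern living room with floor-to-ceiling windows",
    num_inference_steps=4,  # schnell is distilled for few-step sampling
    guidance_scale=0.0,     # schnell is typically run without classifier-free guidance
).images[0]
image.save("modern-living-room.png")
```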
nvidia/Cosmos-0.1-Tokenizer-DV8x8x8
nvidia
2024-11-11T04:29:16Z
350
6
nemo
[ "nemo", "license:other", "region:us" ]
null
2024-11-01T06:41:50Z
--- license: other license_name: nvidia-open-model-license license_link: >- https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf library_name: nemo --- # **Cosmos Tokenizer**: A suite of image and video tokenizers [**Website**](https://research.nvidia.com/labs/dir/cosmos-tokenizer) | [**Code**](https://github.com/NVIDIA/Cosmos-Tokenizer) | **Video** # Model Overview ## Description: **Cosmos Tokenizer** is a suite of visual tokenizers for images and videos that delivers various compression rates while maintaining high reconstruction quality. Cosmos Tokenizer can serve as an effective and efficient building block in both diffusion-based and autoregressive models for image and video generation. Our tokenizers come in two types: **Continuous** (C) and **Discrete** (D), each with **Image** (I) and **Video** (V) variants: * Continuous tokenizers encode visual data into continuous latent embeddings, as shown in latent diffusion models like [Stable Diffusion](https://github.com/CompVis/stable-diffusion). These embeddings are suitable for models that generate data by sampling from continuous distributions. * Discrete tokenizers encode visual data into discrete latent codes, mapping them into quantized indices, as seen in autoregressive transformers such as [VideoPoet](https://sites.research.google/videopoet/). This discretization is required for models that generate data by optimizing the cross-entropy loss, such as the GPT models. | | Continuous ( C ) | Discrete ( D ) | | ------------------|---------------------|---------------------| | **Images ( I )** | Cosmos-Tokenizer-CI | Cosmos-Tokenizer-DI | | **Videos ( V )** | Cosmos-Tokenizer-CV | Cosmos-Tokenizer-DV | Given an image or a video, Cosmos Tokenizer outputs either continuous latents or discrete tokens. Cosmos Tokenizer achieves spatial compression rates of 8x8 or 16x16 and temporal compression factors of 4x or 8x, resulting in a total compression factor of up to 2048x (=8x16x16). Cosmos Tokenizer delivers 8x more total compression than state-of-the-art (SOTA) methods while simultaneously maintaining higher image quality and running up to 12x faster than the best available SOTA tokenizers. 
**Model Developer**: NVIDIA ## Model Versions The initial release (v1.0) of Cosmos Tokenizer includes the following tokenizers: * **Continuous Tokenizers** * Continuous Image (CI) Tokenizer * [Cosmos-Tokenizer-CI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI8x8) (8x8 spatial compression) * [Cosmos-Tokenizer-CI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI16x16) (16x16 spatial compression) * Continuous Video (CV) Tokenizer * [Cosmos-Tokenizer-CV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV4x8x8) (4x temporal compression, 8x8 spatial compression) * [Cosmos-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x8x8) (8x temporal compression, 8x8 spatial compression) * [Cosmos-Tokenizer-CV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x16x16) (8x temporal compression, 16x16 spatial compression) * **Discrete Tokenizers** * Discrete Image (DI) Tokenizer * [Cosmos-Tokenizer-DI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI8x8) (8x8 spatial compression) * [Cosmos-Tokenizer-DI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI16x16) (16x16 spatial compression) * Discrete Video (DV) Tokenizer * [Cosmos-Tokenizer-DV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV4x8x8) (4x temporal compression, 8x8 spatial compression) * [Cosmos-Tokenizer-DV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x8x8) (8x temporal compression, 8x8 spatial compression) * [Cosmos-Tokenizer-DV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x16x16) (8x temporal compression, 16x16 spatial compression) ### License/Terms of Use: [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf) Under the NVIDIA Open Model License, NVIDIA confirms: * Models are commercially usable. * You are free to create and distribute Derivative Models. * NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models. ## Model Architecture: We designed Cosmos Tokenizer using a lightweight and computationally efficient architecture, featuring a temporally causal design. Specifically, we employ causal temporal convolution and causal temporal attention layers to preserve the natural temporal order of video frames, ensuring seamless tokenization of images and videos using a single unified network architecture. The encoder and decoder form a symmetrical pair, which are mirrors of each other. The encoder starts with a 2-level [Haar wavelet](https://link.springer.com/book/10.1007/978-3-319-04295-4) transform layer, which down-samples inputs by a factor of 4 in both spatial and temporal dimensions. Likewise, the decoder ends with an inverse wavelet transform. We employ the vanilla autoencoder (AE) formulation to model the latent space for continuous tokenizers. For discrete tokenizers, we adopt the [Finite-Scalar-Quantization](https://openreview.net/forum?id=8ishA3LxN8) (FSQ) as the latent space quantizer. 
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/638fb8cf2380ffd99caf8c2a/gQH5n9iCEtqZc7uutUwdL.jpeg) ## Input/Output Specifications ### Encoder * **Input** * **Types:** Images or Videos * **Format:** RGB (Red, Green, Blue) * **Resolution:** * Minimum: 256px (shorter side) * Maximum: Up to 4K * **Video Length:** Up to 8 seconds for 1080p videos (bounded by A100 80G GPU memory; higher resolutions will have shorter supported durations) * **Output** * **Types:** Tokens * Continuous Image/Video Tokenizers: Continuous value feature vectors * Discrete Image/Video Tokenizers: Integer indices ### Decoder * **Input** * **Types:** Tokens from encoder * **Output** * **Types:** Images or Videos (matching input type) * **Format:** RGB (Red, Green, Blue) * **Resolution:** Same as input resolution * **Video Length:** Same as input video length ## Software Integration (Required For NVIDIA Models Only): **Runtime Engine(s):** * [Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) * [NeMo](https://github.com/NVIDIA/NeMo) (please install the latest version from the GitHub main branch) **Supported Hardware Microarchitecture Compatibility:** * NVIDIA Ampere (e.g., A100) * NVIDIA Hopper (e.g., H100) Note: We have only tested Cosmos Tokenizer with BF16 precision on Ampere and Hopper GPUs. If you are using older versions of NVIDIA GPUs (e.g., NVIDIA Volta GPUs), you may need to switch to FP32 precision. **Operating System(s):** * Linux (We have not tested on other operating systems.) # Usage Inference Engines: * [Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) (PyTorch) * [NeMo](https://github.com/NVIDIA/NeMo) ## Inference with `Cosmos-Tokenizer` (PyTorch) ### Step-1: Installation of `Cosmos-Tokenizer` Note: Currently, the `Cosmos-Tokenizer` code is only supported on Linux. - Please clone `Cosmos-Tokenizer` from the GitHub repo [github.com/NVIDIA/Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer). ```bash git clone https://github.com/NVIDIA/Cosmos-Tokenizer.git cd Cosmos-Tokenizer ``` - Install dependencies ```bash pip3 install -r requirements.txt apt-get install -y ffmpeg ``` - Preferably, you could build a Docker image using our provided Dockerfile. ```bash docker build -t cosmos-docker -f Dockerfile . # You can run the container as: docker run --gpus all -it --rm -v /home/${USER}:/home/${USER} \ --workdir ${PWD} cosmos-docker /bin/bash ``` ### Step-2: Download Pre-trained Checkpoints - Create a local directory for the pre-trained checkpoints and download the pre-trained checkpoints from HuggingFace. ```python from huggingface_hub import login, snapshot_download import os # You could get your Hugging Face token from https://huggingface.co/settings/tokens login(token="<YOUR-HF-TOKEN>", add_to_git_credential=True) # You could specify the tokenizers you want to download. model_names = [ "Cosmos-Tokenizer-CI8x8", "Cosmos-Tokenizer-CI16x16", "Cosmos-Tokenizer-CV4x8x8", "Cosmos-Tokenizer-CV8x8x8", "Cosmos-Tokenizer-CV8x16x16", "Cosmos-Tokenizer-DI8x8", "Cosmos-Tokenizer-DI16x16", "Cosmos-Tokenizer-DV4x8x8", "Cosmos-Tokenizer-DV8x8x8", "Cosmos-Tokenizer-DV8x16x16", ] for model_name in model_names: hf_repo = "nvidia/" + model_name local_dir = "pretrained_ckpts/" + model_name os.makedirs(local_dir, exist_ok=True) print(f"downloading {model_name} to {local_dir}...") snapshot_download(repo_id=hf_repo, local_dir=local_dir) ``` - Under each checkpoint directory `pretrained_ckpts/<model-name>`, we provide the encoder, decoder and the full autoencoder JIT models. 
```bash ├── pretrained_ckpts/ │ ├── Cosmos-Tokenizer-DV8x8x8/ │ │ ├── encoder.jit │ │ ├── decoder.jit │ │ ├── autoencoder.jit │ ... ``` ### Step-3: Run Inference You can use the following example commands to encode and decode images or videos. For each, the same command works for both continuous and discrete tokenization. Simply provide the proper JIT-compiled ckpt to `checkpoint_enc`, `checkpoint_dec`, or the full autoencoder ckpt to `checkpoint`. ```python import torch from cosmos_tokenizer.video_lib import CausalVideoTokenizer model_name = "Cosmos-Tokenizer-DV4x8x8" input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) encoder = CausalVideoTokenizer(checkpoint_enc=f'pretrained_ckpts/{model_name}/encoder.jit') (indices, codes) = encoder.encode(input_tensor) torch.testing.assert_close(indices.shape, (1, 3, 64, 64)) torch.testing.assert_close(codes.shape, (1, 6, 3, 64, 64)) # The input tensor can be reconstructed by the decoder as: decoder = CausalVideoTokenizer(checkpoint_dec=f'pretrained_ckpts/{model_name}/decoder.jit') reconstructed_tensor = decoder.decode(indices) torch.testing.assert_close(reconstructed_tensor.shape, input_tensor.shape) ``` The `indices` will have the shape `(1, 3, 64, 64)` and contain integral values in the range `[1..64K]`, where the first of the three integral maps represents the first frame. The `codes` will contain the pre-quantization continuous latent with shape `(1, 6, 3, 64, 64)`, where C=6 represents the number of FSQ levels. **Note**: More inference usage commands, including both TorchScript (JIT) and PyTorch Inference APIs on real images and videos, can be found on our GitHub repository [github.com/NVIDIA/Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer). ## Inference with NeMo ### Step-1: Install NeMo Please install NeMo from the GitHub `main` branch following the instructions [here](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#pip-from-a-source-branch). ### Step-2: Run Inference Run the following code to tokenize the video: ```python import torch from nemo.collections.common.video_tokenizers.cosmos_vision_tokenizer import CausalVideoTokenizer model_name = "Cosmos-Tokenizer-DV4x8x8" model = CausalVideoTokenizer.from_pretrained(model_name) input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) (indices, codes) = model.encode(input_tensor) ``` Please see the [Cosmos Tokenizer README within the NeMo repository](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/common/video_tokenizers) for additional examples to create training datasets with the Cosmos Tokenizer. # Evaluation ## Tokenization Performance Comparison We have extensively evaluated the **Cosmos Tokenizer** suite on various image and video benchmark datasets. In addition to commonly used datasets such as [MS-COCO](https://cocodataset.org/#home) and [DAVIS](https://davischallenge.org/), in order to cover a wide variety of visual data and standardize the evaluation, we created a benchmark called [TokenBench](https://github.com/NVlabs/Token-Bench), which is a mixed sampling of video data from diverse domains. 
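The tables below report PSNR, SSIM, and rFVD between original frames and their reconstructions. As a rough reference for how the distortion metric is defined (a minimal sketch, not the official evaluation code), PSNR for tensors scaled to `[0, 1]` can be computed as follows:

```python
import torch

def psnr(original, reconstructed, max_val=1.0):
    # Peak Signal-to-Noise Ratio in dB between two tensors scaled to [0, max_val].
    mse = torch.mean((original - reconstructed) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# Toy check: a frame and a slightly perturbed "reconstruction".
frame = torch.rand(3, 256, 256)
recon = (frame + 0.01 * torch.randn_like(frame)).clamp(0.0, 1.0)
print(f"PSNR: {psnr(frame, recon).item():.2f} dB")
```

Higher PSNR and SSIM indicate closer reconstructions; lower rFVD indicates better perceptual quality over video clips.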
| Tokenizer | Compression Ratio | Quantization | PSNR (DAVIS) | SSIM (DAVIS) | rFVD (DAVIS) | PSNR (TokenBench) | SSIM (TokenBench) | rFVD (TokenBench) | |-----------|------------------|--------------|--------------|--------------|--------------|------------------|------------------|------------------| | VideoGPT | 4×4×4 | VQ | 32.23 | **0.850** | 72.33 | 35.11 | **0.914** | **13.85** | | Omnitokenizer | 4×8×8 | VQ | 28.44 | 0.712 | 188.60 | 30.15 | 0.827 | 53.55 | | Cosmos-Tokenizer-DV | 4×8×8 | FSQ | **32.98** | 0.818 | **37.36** | **35.13** | 0.887 | 19.67 | | Cosmos-Tokenizer-DV | 8×8×8 | FSQ | 32.11 | 0.775 | 100.15 | 34.74 | 0.872 | 43.86 | | Cosmos-Tokenizer-DV | 8×16×16 | FSQ | 31.42 | 0.716 | 241.52 | 33.71 | 0.828 | 113.48 | * We compare with the state-of-the-art discrete video tokenizer, [OmniTokenizer](https://github.com/FoundationVision/OmniTokenizer). * Evaluation metrics: * Peak Signal-to-Noise Ratio (PSNR) * Structural Similarity (SSIM) * Reconstruction Fréchet Video Distance (rFVD) ## Runtime Comparison The following table shows the number of parameters and the averaged encoding and decoding times per image or video frame, measured on a single A100 80GB GPU. For comparison, we also list the parameters and average speeds of prior state-of-the-art tokenizer(s) with the same compression ratio. | Tokenizer | Resolution | Compression Ratio | Parameters | Time (ms) | |----------------|------------|-------------------|------------|-----------| | OmniTokenizer | 720x1280 | 4×8×8 | 54M | 53.2 | | Cosmos-DV | 720x1280 | 4×8×8 | 105M | 51.5 | Note: We benchmarked the runtime for images under the 8x8 compression and videos under the 4×8×8 compression. Tokenizers with different compression ratios are not included in this comparison. ## Ethical Considerations NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). ### Bias Field | Response :---------------------------------------------------------------------------------------------------|:--------------- Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None Measures taken to mitigate against unwanted bias: | None ### Explainability Field | Response :------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------- Intended Application & Domain: | Tokenization of images and videos Model Type: | Auto-Encoder Intended Users: | Generative AI developers for image and video generation models Output: | Images/Videos and Latent Tokens Describe how the model works: | Compresses and decompresses visual input (image/video). 
Technical Limitations: | Due to tokenizer compression limitations, some visual information (such as small text and other structured fine details) may not be reconstructed accurately. Verified to have met prescribed NVIDIA quality standards: | Yes Performance Metrics: | Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Reconstruction Fréchet Video Distance (rFVD), Reconstruction Fréchet Inception Distance (rFID), Latency Potential Known Risks: | The tokenizer processes all forms of input, including content that may be considered toxic, offensive, or indecent. Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf) ### Privacy Field | Response :----------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------- Generatable or reverse engineerable personal information? | No Protected class data used to create this model? | None Known Was consent obtained for any personal data used? | None Known How often is dataset reviewed? | Before Release Is a mechanism in place to honor data subject right of access or deletion of personal data? | Not Applicable If personal data was collected for the development of the model, was it collected directly by NVIDIA? | Not Applicable If personal data was collected for the development of the model by NVIDIA, do you maintain or have access to disclosures made to data subjects? | Not Applicable If personal data was collected for the development of this AI model, was it minimized to only what was required? | Not Applicable Is there provenance for all datasets used in training? | Yes Does data labeling (annotation, metadata) comply with privacy laws? | Yes Is data compliant with data subject requests for data correction or removal, if such a request was made? | Not Applicable ### Safety Field | Response :---------------------------------------------------|:---------------------------------- Model Application(s): | Tokenization of images and videos Describe the life critical impact (if present). | None Known Use Case Restrictions: | See [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf) Model and dataset restrictions: | The Principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs. ### Plus Plus (++) Promise We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been: * Verified to comply with current applicable disclosure laws, regulations, and industry standards. * Verified to comply with applicable privacy labeling requirements. * Annotated to describe the collector/source (NVIDIA or a third-party). * Characterized for technical limitations. * Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests. * Reviewed before release. * Tagged for known restrictions and potential safety implications. # Core Contributors Fitsum Reda, Jinwei Gu, Xian Liu, Songwei Ge, Ting-Chun Wang, Haoxiang Wang, Ming-Yu Liu
nvidia/Cosmos-0.1-Tokenizer-DV4x8x8
nvidia
2024-11-11T04:29:05Z
403
12
nemo
[ "nemo", "license:other", "region:us" ]
null
2024-11-05T06:20:16Z
--- license: other license_name: nvidia-open-model-license license_link: >- https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf library_name: nemo --- # **Cosmos Tokenizer**: A suite of image and video tokenizers [**Website**](https://research.nvidia.com/labs/dir/cosmos-tokenizer) | [**Code**](https://github.com/NVIDIA/Cosmos-Tokenizer) | [**Video**](https://youtu.be/Soy_myOfWIU) # Model Overview ## Description: **Cosmos Tokenizer** is a suite of visual tokenizers for images and videos that delivers various compression rates while maintaining high reconstruction quality. Cosmos Tokenizer can serve as an effective and efficient building block in both diffusion-based and autoregressive models for image and video generation. Our tokenizers come in two types: **Continuous** (C) and **Discrete** (D), each with **Image** (I) and **Video** (V) variants: * Continuous tokenizers encode visual data into continuous latent embeddings, as shown in latent diffusion models like [Stable Diffusion](https://github.com/CompVis/stable-diffusion). These embeddings are suitable for models that generate data by sampling from continuous distributions. * Discrete tokenizers encode visual data into discrete latent codes, mapping them into quantized indices, as seen in autoregressive transformers such as [VideoPoet](https://sites.research.google/videopoet/). This discretization is required for models that generate data by optimizing the cross-entropy loss, such as the GPT models. | | Continuous ( C ) | Discrete ( D ) | | ------------------|---------------------|---------------------| | **Images ( I )** | Cosmos-Tokenizer-CI | Cosmos-Tokenizer-DI | | **Videos ( V )** | Cosmos-Tokenizer-CV | Cosmos-Tokenizer-DV | Given an image or a video, Cosmos Tokenizer outputs either continuous latents or discrete tokens. Cosmos Tokenizer achieves spatial compression rates of 8x8 or 16x16 and temporal compression factors of 4x or 8x, resulting in a total compression factor of up to 2048x (=8x16x16). Cosmos Tokenizer delivers 8x more total compression than state-of-the-art (SOTA) methods while simultaneously maintaining higher image quality and running up to 12x faster than the best available SOTA tokenizers. 
**Model Developer**: NVIDIA ## Model Versions The initial release (v1.0) of Cosmos Tokenizer includes the following tokenizers: * **Continuous Tokenizers** * Continuous Image (CI) Tokenizer * [Cosmos-Tokenizer-CI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI8x8) (8x8 spatial compression) * [Cosmos-Tokenizer-CI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI16x16) (16x16 spatial compression) * Continuous Video (CV) Tokenizer * [Cosmos-Tokenizer-CV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV4x8x8) (4x temporal compression, 8x8 spatial compression) * [Cosmos-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x8x8) (8x temporal compression, 8x8 spatial compression) * [Cosmos-Tokenizer-CV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x16x16) (8x temporal compression, 16x16 spatial compression) * **Discrete Tokenizers** * Discrete Image (DI) Tokenizer * [Cosmos-Tokenizer-DI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI8x8) (8x8 spatial compression) * [Cosmos-Tokenizer-DI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI16x16) (16x16 spatial compression) * Discrete Video (DV) Tokenizer * [Cosmos-Tokenizer-DV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV4x8x8) (4x temporal compression, 8x8 spatial compression) * [Cosmos-Tokenizer-DV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x8x8) (8x temporal compression, 8x8 spatial compression) * [Cosmos-Tokenizer-DV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x16x16) (8x temporal compression, 16x16 spatial compression) ### License/Terms of Use: [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf) Under the NVIDIA Open Model License, NVIDIA confirms: * Models are commercially usable. * You are free to create and distribute Derivative Models. * NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models. ## Model Architecture: We designed Cosmos Tokenizer using a lightweight and computationally efficient architecture, featuring a temporally causal design. Specifically, we employ causal temporal convolution and causal temporal attention layers to preserve the natural temporal order of video frames, ensuring seamless tokenization of images and videos using a single unified network architecture. The encoder and decoder form a symmetrical pair, which are mirrors of each other. The encoder starts with a 2-level [Haar wavelet](https://link.springer.com/book/10.1007/978-3-319-04295-4) transform layer, which down-samples inputs by a factor of 4 in both spatial and temporal dimensions. Likewise, the decoder ends with an inverse wavelet transform. We employ the vanilla autoencoder (AE) formulation to model the latent space for continuous tokenizers. For discrete tokenizers, we adopt the [Finite-Scalar-Quantization](https://openreview.net/forum?id=8ishA3LxN8) (FSQ) as the latent space quantizer. 
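As a rough illustration of the FSQ idea described above (a minimal sketch, not the Cosmos implementation), each latent channel is bounded and snapped onto a small per-channel grid, and the per-channel grid indices are folded into a single integer code. The level choice `(8, 8, 8, 5, 5, 5)` is an assumption here, picked only because it yields a code range of 8x8x8x5x5x5 = 64,000, consistent with the `[1..64K]` index range mentioned in the inference example further down.

```python
import torch

def fsq_quantize(z, levels=(8, 8, 8, 5, 5, 5)):
    # Squash each latent channel into (-1, 1), then snap it onto a uniform grid
    # with levels[i] points; this is the core idea of finite scalar quantization.
    levels_t = torch.tensor(levels, dtype=z.dtype)
    z_bounded = torch.tanh(z)
    idx = torch.round((z_bounded + 1) / 2 * (levels_t - 1))  # per-channel grid index
    z_q = idx / (levels_t - 1) * 2 - 1                       # quantized latent in [-1, 1]
    # Fold the per-channel indices into one integer code per spatial position.
    basis = torch.cumprod(torch.cat([torch.ones(1, dtype=z.dtype), levels_t[:-1]]), dim=0)
    codes = (idx * basis).sum(dim=-1).long()                 # in [0, prod(levels) - 1]
    return z_q, codes

# Toy latent with 6 FSQ channels in the last dimension.
z = torch.randn(2, 4, 4, 6)
z_q, codes = fsq_quantize(z)
print(z_q.shape, codes.shape, int(codes.max()))  # codes stay below 8*8*8*5*5*5 = 64000
```

Unlike vector quantization, there is no learned codebook: the grid is fixed, which keeps training simple and avoids codebook collapse.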
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/638fb8cf2380ffd99caf8c2a/gQH5n9iCEtqZc7uutUwdL.jpeg) ## Input/Output Specifications ### Encoder * **Input** * **Types:** Images or Videos * **Format:** RGB (Red, Green, Blue) * **Resolution:** * Minimum: 256px (shorter side) * Maximum: Up to 4K * **Video Length:** Up to 8 seconds for 1080p videos (bounded by A100 80G GPU memory; higher resolutions will have shorter supported durations) * **Output** * **Types:** Tokens * Continuous Image/Video Tokenizers: Continuous value feature vectors * Discrete Image/Video Tokenizers: Integer indices ### Decoder * **Input** * **Types:** Tokens from encoder * **Output** * **Types:** Images or Videos (matching input type) * **Format:** RGB (Red, Green, Blue) * **Resolution:** Same as input resolution * **Video Length:** Same as input video length ## Software Integration (Required For NVIDIA Models Only): **Runtime Engine(s):** * [Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) * [NeMo](https://github.com/NVIDIA/NeMo) (please install the latest version from the GitHub main branch) **Supported Hardware Microarchitecture Compatibility:** * NVIDIA Ampere (e.g., A100) * NVIDIA Hopper (e.g., H100) Note: We have only tested Cosmos Tokenizer with BF16 precision on Ampere and Hopper GPUs. If you are using older versions of NVIDIA GPUs (e.g., NVIDIA Volta GPUs), you may need to switch to FP32 precision. **Operating System(s):** * Linux (We have not tested on other operating systems.) # Usage Inference Engines: * [Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) (PyTorch) * [NeMo](https://github.com/NVIDIA/NeMo) ## Inference with `Cosmos-Tokenizer` (PyTorch) ### Step-1: Installation of `Cosmos-Tokenizer` Note: Currently, the `Cosmos-Tokenizer` code is only supported on Linux. - Please clone the `Cosmos-Tokenizer` from GitHub repo [github.com/NVIDIA/Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer). ```bash git clone https://github.com/NVIDIA/Cosmos-Tokenizer.git cd Cosmos-Tokenizer ``` - Install dependencies ```bash pip3 install -r requirements.txt apt-get install -y ffmpeg ``` - Preferably, you could build a docker image using our provided Dockerfile. ```bash docker build -t cosmos-docker -f Dockerfile . # You can run the container as: docker run --gpus all -it --rm -v /home/${USER}:/home/${USER} \ --workdir ${PWD} cosmos-docker /bin/bash ``` ### Step-2: Download Pre-trained Checkpoints - Create a local directory for the pre-trained checkpoints and download the pre-trained checkpoints from HuggingFace. ```python from huggingface_hub import login, snapshot_download import os # You could get your Hugging Face token from https://huggingface.co/settings/tokens login(token=<YOUR-HF-TOKEN>, add_to_git_credential=True) # You could specify the tokenizers you want to download. model_names = [ "Cosmos-Tokenizer-CI8x8", "Cosmos-Tokenizer-CI16x16", "Cosmos-Tokenizer-CV4x8x8", "Cosmos-Tokenizer-CV8x8x8", "Cosmos-Tokenizer-CV8x16x16", "Cosmos-Tokenizer-DI8x8", "Cosmos-Tokenizer-DI16x16", "Cosmos-Tokenizer-DV4x8x8", "Cosmos-Tokenizer-DV8x8x8", "Cosmos-Tokenizer-DV8x16x16", ] for model_name in model_names: hf_repo = "nvidia/" + model_name local_dir = "pretrained_ckpts/" + model_name os.makedirs(local_dir, exist_ok=True) print(f"downloading {model_name} to {local_dir}...") snapshot_download(repo_id=hf_repo, local_dir=local_dir) ``` - Under each checkpoint directory `pretrained_ckpts/<model-name>`, we provide the encoder, decoder and the full autoencoder JIT models. 
```bash ├── pretrained_ckpts/ │ ├── Cosmos-Tokenizer-DV8x8x8/ │ │ ├── encoder.jit │ │ ├── decoder.jit │ │ ├── autoencoder.jit │ ... ``` ### Step-3: Run Inference You can use the following example commands to encode and decode images or videos. For each, the same command works for both continuous and discrete tokenization. Simply provide the proper JIT-compiled ckpt to `checkpoint_enc`, `checkpoint_dec`, or the full autoencoder ckpt to `checkpoint`. ```python import torch from cosmos_tokenizer.video_lib import CausalVideoTokenizer model_name = "Cosmos-Tokenizer-DV4x8x8" input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) encoder = CausalVideoTokenizer(checkpoint_enc=f'pretrained_ckpts/{model_name}/encoder.jit') (indices, codes) = encoder.encode(input_tensor) torch.testing.assert_close(indices.shape, (1, 3, 64, 64)) torch.testing.assert_close(codes.shape, (1, 6, 3, 64, 64)) # The input tensor can be reconstructed by the decoder as: decoder = CausalVideoTokenizer(checkpoint_dec=f'pretrained_ckpts/{model_name}/decoder.jit') reconstructed_tensor = decoder.decode(indices) torch.testing.assert_close(reconstructed_tensor.shape, input_tensor.shape) ``` The `indices` will have the shape `(1, 3, 64, 64)` and contain integral values in the range `[1..64K]`, where the first of the three integral maps represents the first frame. The `codes` will contain the pre-quantization continuous latent with shape `(1, 6, 3, 64, 64)`, where C=6 represents the number of FSQ levels. **Note**: More inference usage commands, including both TorchScript (JIT) and PyTorch Inference APIs on real images and videos, can be found on our GitHub repository [github.com/NVIDIA/Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer). ## Inference with NeMo ### Step-1: Install NeMo Please install NeMo from the GitHub `main` branch following the instructions [here](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#pip-from-a-source-branch). ### Step-2: Run Inference Run the following code to tokenize the video: ```python import torch from nemo.collections.common.video_tokenizers.cosmos_vision_tokenizer import CausalVideoTokenizer model_name = "Cosmos-Tokenizer-DV4x8x8" model = CausalVideoTokenizer.from_pretrained(model_name) input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) (indices, codes) = model.encode(input_tensor) ``` Please see the [Cosmos Tokenizer README within the NeMo repository](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/common/video_tokenizers) for additional examples to create training datasets with the Cosmos Tokenizer. # Evaluation ## Tokenization Performance Comparison We have extensively evaluated the **Cosmos Tokenizer** suite on various image and video benchmark datasets. In addition to commonly used datasets such as [MS-COCO](https://cocodataset.org/#home) and [DAVIS](https://davischallenge.org/), in order to cover a wide variety of visual data and standardize the evaluation, we created a benchmark called [TokenBench](https://github.com/NVlabs/Token-Bench), which is a mixed sampling of video data from diverse domains. 
| Tokenizer | Compression Ratio | Quantization | PSNR (DAVIS) | SSIM (DAVIS) | rFVD (DAVIS) | PSNR (TokenBench) | SSIM (TokenBench) | rFVD (TokenBench) | |-----------|------------------|--------------|--------------|--------------|--------------|------------------|------------------|------------------| | VideoGPT | 4×4×4 | VQ | 32.23 | **0.850** | 72.33 | 35.11 | **0.914** | **13.85** | | Omnitokenizer | 4×8×8 | VQ | 28.44 | 0.712 | 188.60 | 30.15 | 0.827 | 53.55 | | Cosmos-Tokenizer-DV | 4×8×8 | FSQ | **32.98** | 0.818 | **37.36** | **35.13** | 0.887 | 19.67 | | Cosmos-Tokenizer-DV | 8×8×8 | FSQ | 32.11 | 0.775 | 100.15 | 34.74 | 0.872 | 43.86 | | Cosmos-Tokenizer-DV | 8×16×16 | FSQ | 31.42 | 0.716 | 241.52 | 33.71 | 0.828 | 113.48 | * We compare with the state-of-the-art discrete video tokenizer, [OmniTokenizer](https://github.com/FoundationVision/OmniTokenizer). * Evaluation metrics: * Peak Signal-to-Noise Ratio (PSNR) * Structural Similarity (SSIM) * Reconstruction Fréchet Video Distance (rFVD) ## Runtime Comparison The following table shows the number of parameters and the averaged encoding and decoding times per image or video frame, measured on a single A100 80GB GPU. For comparison, we also list the parameters and average speeds of prior state-of-the-art tokenizer(s) with the same compression ratio. | Tokenizer | Resolution | Compression Ratio | Parameters | Time (ms) | |----------------|------------|-------------------|------------|-----------| | OmniTokenizer | 720x1280 | 4×8×8 | 54M | 53.2 | | Cosmos-DV | 720x1280 | 4×8×8 | 105M | 51.5 | Note: We benchmarked the runtime for images under the 8x8 compression and videos under the 4×8×8 compression. Tokenizers with different compression ratios are not included in this comparison. ## Ethical Considerations NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). ### Bias Field | Response :---------------------------------------------------------------------------------------------------|:--------------- Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None Measures taken to mitigate against unwanted bias: | None ### Explainability Field | Response :------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------- Intended Application & Domain: | Tokenization of images and videos Model Type: | Auto-Encoder Intended Users: | Generative AI developers for image and video generation models Output: | Images/Videos and Latent Tokens Describe how the model works: | Compresses and decompresses visual input (image/video). 
Technical Limitations: | Due to tokenizer compression limitations, some visual information (such as small text and other structured fine details) may not be reconstructed accurately. Verified to have met prescribed NVIDIA quality standards: | Yes Performance Metrics: | Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Reconstruction Fréchet Video Distance (rFVD), Reconstruction Fréchet Inception Distance (rFID), Latency Potential Known Risks: | The tokenizer processes all forms of input, including content that may be considered toxic, offensive, or indecent. Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf) ### Privacy Field | Response :----------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------- Generatable or reverse engineerable personal information? | No Protected class data used to create this model? | None Known Was consent obtained for any personal data used? | None Known How often is dataset reviewed? | Before Release Is a mechanism in place to honor data subject right of access or deletion of personal data? | Not Applicable If personal data was collected for the development of the model, was it collected directly by NVIDIA? | Not Applicable If personal data was collected for the development of the model by NVIDIA, do you maintain or have access to disclosures made to data subjects? | Not Applicable If personal data was collected for the development of this AI model, was it minimized to only what was required? | Not Applicable Is there provenance for all datasets used in training? | Yes Does data labeling (annotation, metadata) comply with privacy laws? | Yes Is data compliant with data subject requests for data correction or removal, if such a request was made? | Not Applicable ### Safety Field | Response :---------------------------------------------------|:---------------------------------- Model Application(s): | Tokenization of images and videos Describe the life critical impact (if present). | None Known Use Case Restrictions: | See [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf) Model and dataset restrictions: | The Principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs. ### Plus Plus (++) Promise We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been: * Verified to comply with current applicable disclosure laws, regulations, and industry standards. * Verified to comply with applicable privacy labeling requirements. * Annotated to describe the collector/source (NVIDIA or a third-party). * Characterized for technical limitations. * Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests. * Reviewed before release. * Tagged for known restrictions and potential safety implications. # Core Contributors Fitsum Reda, Jinwei Gu, Xian Liu, Songwei Ge, Ting-Chun Wang, Haoxiang Wang, Ming-Yu Liu
asr-africa/bambara_mms_10_hour_mixed_dataset
asr-africa
2024-11-11T04:27:39Z
17
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-11-10T14:13:39Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - generated_from_trainer metrics: - wer model-index: - name: bambara_mms_10_hour_mixed_dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/asr-africa-research-team/ASR%20Africa/runs/f1iup63e) # bambara_mms_10_hour_mixed_dataset This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2512 - Wer: 0.52 - Cer: 0.3632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-------:|:-----:|:---------------:|:------:|:------:| | 1.9455 | 0.8482 | 500 | 1.5056 | 0.8112 | 0.4290 | | 1.5004 | 1.6964 | 1000 | 1.3041 | 0.7323 | 0.3374 | | 1.3813 | 2.5445 | 1500 | 1.2313 | 0.7115 | 0.3728 | | 1.3102 | 3.3927 | 2000 | 1.1950 | 0.7120 | 0.4489 | | 1.2181 | 4.2409 | 2500 | 1.1980 | 0.6981 | 0.4080 | | 1.174 | 5.0891 | 3000 | 1.1699 | 0.7216 | 0.3960 | | 1.1191 | 5.9372 | 3500 | 1.1130 | 0.7440 | 0.4183 | | 1.0556 | 6.7854 | 4000 | 1.0874 | 0.6244 | 0.3241 | | 1.0105 | 7.6336 | 4500 | 1.0767 | 0.6353 | 0.3932 | | 0.9775 | 8.4818 | 5000 | 1.1265 | 0.6319 | 0.3856 | | 0.9283 | 9.3299 | 5500 | 1.1483 | 0.6483 | 0.4394 | | 0.8955 | 10.1781 | 6000 | 1.0845 | 0.6544 | 0.4310 | | 0.852 | 11.0263 | 6500 | 1.0088 | 0.5970 | 0.3317 | | 0.7987 | 11.8745 | 7000 | 1.0797 | 0.6010 | 0.3611 | | 0.7569 | 12.7226 | 7500 | 1.0715 | 0.6100 | 0.3884 | | 0.7299 | 13.5708 | 8000 | 1.1275 | 0.6071 | 0.3978 | | 0.6995 | 14.4190 | 8500 | 1.1741 | 0.6209 | 0.4731 | | 0.6671 | 15.2672 | 9000 | 1.0855 | 0.5953 | 0.3887 | | 0.6431 | 16.1154 | 9500 | 1.1793 | 0.5662 | 0.3377 | | 0.612 | 16.9635 | 10000 | 1.1662 | 0.5778 | 0.3876 | | 0.5784 | 17.8117 | 10500 | 1.1753 | 0.5764 | 0.3820 | | 0.5501 | 18.6599 | 11000 | 1.2029 | 0.5832 | 0.3877 | | 0.5286 | 19.5081 | 11500 | 1.3072 | 0.6082 | 0.4344 | | 0.5066 | 20.3562 | 12000 | 1.1977 | 0.5755 | 0.3815 | | 0.4812 | 21.2044 | 12500 | 1.2332 | 0.5624 | 0.3667 | | 0.4609 | 22.0526 | 13000 | 1.3325 | 0.5465 | 0.3521 | | 0.4338 | 22.9008 | 13500 | 1.3214 | 0.5512 | 0.3628 | | 0.4244 | 23.7489 | 14000 | 1.4046 | 0.5612 | 0.3858 | | 0.3963 | 24.5971 | 14500 | 1.4522 | 0.5704 | 0.3985 | | 0.3844 | 25.4453 | 15000 | 1.3522 | 0.5706 | 0.3945 | | 0.3665 | 26.2935 | 15500 | 1.3853 | 0.5391 | 0.3524 | | 0.3494 | 27.1416 | 16000 | 1.5375 | 0.5476 | 0.3784 | | 0.3338 | 27.9898 | 16500 | 1.4892 | 0.5563 | 0.3732 | | 0.3172 | 28.8380 | 17000 | 1.5445 | 0.5500 | 0.3761 | | 0.308 | 29.6862 | 17500 | 1.6170 | 0.5530 | 0.3821 | | 0.2871 | 30.5344 | 18000 | 1.6431 | 0.5499 | 0.3889 | | 
0.2724 | 31.3825 | 18500 | 1.6469 | 0.5362 | 0.3614 | | 0.2653 | 32.2307 | 19000 | 1.6854 | 0.5428 | 0.3648 | | 0.2505 | 33.0789 | 19500 | 1.7214 | 0.5413 | 0.3654 | | 0.2405 | 33.9271 | 20000 | 1.7085 | 0.5550 | 0.3809 | | 0.2304 | 34.7752 | 20500 | 1.7357 | 0.5467 | 0.3772 | | 0.2259 | 35.6234 | 21000 | 1.7828 | 0.5465 | 0.3799 | | 0.2111 | 36.4716 | 21500 | 1.8705 | 0.5350 | 0.3678 | | 0.2014 | 37.3198 | 22000 | 1.8758 | 0.5361 | 0.3682 | | 0.2016 | 38.1679 | 22500 | 1.9686 | 0.5344 | 0.3842 | | 0.1884 | 39.0161 | 23000 | 1.9711 | 0.5288 | 0.3742 | | 0.1842 | 39.8643 | 23500 | 1.9821 | 0.5337 | 0.3827 | | 0.1745 | 40.7125 | 24000 | 1.9664 | 0.5262 | 0.3730 | | 0.1665 | 41.5606 | 24500 | 2.0731 | 0.5327 | 0.3733 | | 0.1639 | 42.4088 | 25000 | 2.1357 | 0.5286 | 0.3694 | | 0.1536 | 43.2570 | 25500 | 2.0855 | 0.5290 | 0.3640 | | 0.1532 | 44.1052 | 26000 | 2.1890 | 0.5238 | 0.3635 | | 0.1443 | 44.9534 | 26500 | 2.1638 | 0.5296 | 0.3666 | | 0.1428 | 45.8015 | 27000 | 2.1495 | 0.5232 | 0.3624 | | 0.1377 | 46.6497 | 27500 | 2.2047 | 0.5234 | 0.3580 | | 0.1348 | 47.4979 | 28000 | 2.2385 | 0.5215 | 0.3651 | | 0.1285 | 48.3461 | 28500 | 2.2492 | 0.5203 | 0.3650 | | 0.1303 | 49.1942 | 29000 | 2.2512 | 0.52 | 0.3632 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.1.0+cu118 - Datasets 2.17.0 - Tokenizers 0.20.3
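The card does not include an inference example; as a minimal, hypothetical sketch, the fine-tuned checkpoint can be loaded with the standard `transformers` ASR pipeline. The path `audio.wav` below is a placeholder for a 16 kHz mono recording in Bambara.

```python
from transformers import pipeline

# Hypothetical usage sketch; the repo id comes from this card's header.
asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/bambara_mms_10_hour_mixed_dataset",
)

# "audio.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("audio.wav")["text"])
```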
Triangle104/gemma-2-2b-Q8_0-GGUF
Triangle104
2024-11-11T04:17:46Z
6
0
transformers
[ "transformers", "gguf", "unsloth", "gemma2", "gemma", "llama-cpp", "gguf-my-repo", "en", "base_model:unsloth/gemma-2-2b", "base_model:quantized:unsloth/gemma-2-2b", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-11-11T04:17:31Z
--- language: - en library_name: transformers license: gemma tags: - unsloth - transformers - gemma2 - gemma - llama-cpp - gguf-my-repo base_model: unsloth/gemma-2-2b --- # Triangle104/gemma-2-2b-Q8_0-GGUF This model was converted to GGUF format from [`unsloth/gemma-2-2b`](https://huggingface.co/unsloth/gemma-2-2b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/gemma-2-2b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/gemma-2-2b-Q8_0-GGUF --hf-file gemma-2-2b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/gemma-2-2b-Q8_0-GGUF --hf-file gemma-2-2b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/gemma-2-2b-Q8_0-GGUF --hf-file gemma-2-2b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/gemma-2-2b-Q8_0-GGUF --hf-file gemma-2-2b-q8_0.gguf -c 2048 ```
Triangle104/gemma-2-2b-Q5_K_M-GGUF
Triangle104
2024-11-11T04:15:48Z
5
0
transformers
[ "transformers", "gguf", "unsloth", "gemma2", "gemma", "llama-cpp", "gguf-my-repo", "en", "base_model:unsloth/gemma-2-2b", "base_model:quantized:unsloth/gemma-2-2b", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-11-11T04:15:38Z
--- language: - en library_name: transformers license: gemma tags: - unsloth - transformers - gemma2 - gemma - llama-cpp - gguf-my-repo base_model: unsloth/gemma-2-2b --- # Triangle104/gemma-2-2b-Q5_K_M-GGUF This model was converted to GGUF format from [`unsloth/gemma-2-2b`](https://huggingface.co/unsloth/gemma-2-2b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/gemma-2-2b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/gemma-2-2b-Q5_K_M-GGUF --hf-file gemma-2-2b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/gemma-2-2b-Q5_K_M-GGUF --hf-file gemma-2-2b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/gemma-2-2b-Q5_K_M-GGUF --hf-file gemma-2-2b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/gemma-2-2b-Q5_K_M-GGUF --hf-file gemma-2-2b-q5_k_m.gguf -c 2048 ```
Triangle104/gemma-2-2b-Q5_K_S-GGUF
Triangle104
2024-11-11T04:14:32Z
5
0
transformers
[ "transformers", "gguf", "unsloth", "gemma2", "gemma", "llama-cpp", "gguf-my-repo", "en", "base_model:unsloth/gemma-2-2b", "base_model:quantized:unsloth/gemma-2-2b", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-11-11T04:14:21Z
--- language: - en library_name: transformers license: gemma tags: - unsloth - transformers - gemma2 - gemma - llama-cpp - gguf-my-repo base_model: unsloth/gemma-2-2b --- # Triangle104/gemma-2-2b-Q5_K_S-GGUF This model was converted to GGUF format from [`unsloth/gemma-2-2b`](https://huggingface.co/unsloth/gemma-2-2b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/gemma-2-2b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/gemma-2-2b-Q5_K_S-GGUF --hf-file gemma-2-2b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/gemma-2-2b-Q5_K_S-GGUF --hf-file gemma-2-2b-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/gemma-2-2b-Q5_K_S-GGUF --hf-file gemma-2-2b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/gemma-2-2b-Q5_K_S-GGUF --hf-file gemma-2-2b-q5_k_s.gguf -c 2048 ```
mradermacher/AlloyingotneoyExperiment24-7B-GGUF
mradermacher
2024-11-11T04:14:08Z
5
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "automerger", "en", "base_model:automerger/AlloyingotneoyExperiment24-7B", "base_model:quantized:automerger/AlloyingotneoyExperiment24-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-11-11T03:39:15Z
--- base_model: automerger/AlloyingotneoyExperiment24-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - automerger --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/automerger/AlloyingotneoyExperiment24-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyExperiment24-7B-GGUF/resolve/main/AlloyingotneoyExperiment24-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
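As an illustrative starting point (an assumption, not something documented in this card), one of the quant files listed above can be fetched with `huggingface_hub`; the downloaded path can then be passed to any GGUF-compatible runtime such as llama.cpp.

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant listed in the table above; the resulting local path
# can then be loaded by a GGUF runtime such as llama.cpp.
path = hf_hub_download(
    repo_id="mradermacher/AlloyingotneoyExperiment24-7B-GGUF",
    filename="AlloyingotneoyExperiment24-7B.Q4_K_M.gguf",
)
print(path)
```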
Triangle104/gemma-2-2b-Q4_K_M-GGUF
Triangle104
2024-11-11T04:13:24Z
10
0
transformers
[ "transformers", "gguf", "unsloth", "gemma2", "gemma", "llama-cpp", "gguf-my-repo", "en", "base_model:unsloth/gemma-2-2b", "base_model:quantized:unsloth/gemma-2-2b", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-11-11T04:13:15Z
--- language: - en library_name: transformers license: gemma tags: - unsloth - transformers - gemma2 - gemma - llama-cpp - gguf-my-repo base_model: unsloth/gemma-2-2b --- # Triangle104/gemma-2-2b-Q4_K_M-GGUF This model was converted to GGUF format from [`unsloth/gemma-2-2b`](https://huggingface.co/unsloth/gemma-2-2b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/gemma-2-2b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/gemma-2-2b-Q4_K_M-GGUF --hf-file gemma-2-2b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/gemma-2-2b-Q4_K_M-GGUF --hf-file gemma-2-2b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/gemma-2-2b-Q4_K_M-GGUF --hf-file gemma-2-2b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/gemma-2-2b-Q4_K_M-GGUF --hf-file gemma-2-2b-q4_k_m.gguf -c 2048 ```
Triangle104/gemma-2-2b-Q4_K_S-GGUF
Triangle104
2024-11-11T04:12:13Z
6
0
transformers
[ "transformers", "gguf", "unsloth", "gemma2", "gemma", "llama-cpp", "gguf-my-repo", "en", "base_model:unsloth/gemma-2-2b", "base_model:quantized:unsloth/gemma-2-2b", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-11-11T04:12:04Z
--- language: - en library_name: transformers license: gemma tags: - unsloth - transformers - gemma2 - gemma - llama-cpp - gguf-my-repo base_model: unsloth/gemma-2-2b --- # Triangle104/gemma-2-2b-Q4_K_S-GGUF This model was converted to GGUF format from [`unsloth/gemma-2-2b`](https://huggingface.co/unsloth/gemma-2-2b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/gemma-2-2b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/gemma-2-2b-Q4_K_S-GGUF --hf-file gemma-2-2b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/gemma-2-2b-Q4_K_S-GGUF --hf-file gemma-2-2b-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/gemma-2-2b-Q4_K_S-GGUF --hf-file gemma-2-2b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/gemma-2-2b-Q4_K_S-GGUF --hf-file gemma-2-2b-q4_k_s.gguf -c 2048 ```
chenchiyuan/task-15-Qwen-Qwen1.5-0.5B
chenchiyuan
2024-11-11T04:11:59Z
9
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-11-09T05:11:34Z
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
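Since the getting-started section above is still a placeholder, here is a hypothetical loading sketch. It assumes this repository holds a PEFT adapter for the `Qwen/Qwen1.5-0.5B` base model listed in the card metadata; the prompt is arbitrary.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: load the base model, then attach this repo as a PEFT adapter.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "chenchiyuan/task-15-Qwen-Qwen1.5-0.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```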
anto18671/lumenspark
anto18671
2024-11-11T04:10:43Z
136
1
transformers
[ "transformers", "safetensors", "lumenspark", "text-generation", "custom_code", "en", "dataset:allenai/c4", "base_model:anto18671/lumenspark", "base_model:finetune:anto18671/lumenspark", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2024-10-07T00:07:09Z
--- license: mit datasets: - allenai/c4 language: - en library_name: transformers pipeline_tag: text-generation base_model: - anto18671/lumenspark --- # Linformer-based Language Model Efficient language modeling optimized for long sequences using the Linformer architecture. This model reduces memory and computational overhead, making it ideal for various text generation tasks. ## Table of Contents - [Introduction](#introduction) - [Architecture](#architecture) - [Installation](#installation) - [Quick Start](#quick-start) - [Inference Parameters](#inference-parameters) - [Hyperparameters](#hyperparameters) - [Training Progress](#training-progress) - [Sponsorship](#sponsorship) - [License](#license) ## Introduction The **Linformer-based Language Model** leverages the Linformer architecture to efficiently handle long sequences in text generation and other language tasks. By optimizing the self-attention mechanism, this model maintains high performance while reducing resource consumption, making it suitable for applications like text completion and generation. ## Architecture Built upon the **Linformer Transformer**, the model incorporates several key innovations: 1. **Efficient Attention**: Reduces self-attention complexity from quadratic to linear by projecting the attention matrix into a lower-dimensional space. 2. **Low-Rank Linear Projections**: Utilizes LowRankLinear layers to decrease dimensionality without compromising expressiveness. 3. **Self-Attention Mechanism**: Implements multi-head self-attention with full expressivity by avoiding low-rank projections in this module. 4. **Factorized Feed-Forward Layers**: Uses factorized LowRankLinear layers in the Feed-Forward Neural Network to maintain performance with fewer parameters. 5. **PreNorm with LayerNorm and LayerScale**: Applies Layer Normalization before attention and feed-forward layers, enhanced with LayerScale for better gradient flow and stability. 6. **Dropout & Residual Connections**: Incorporates dropout for regularization and residual connections to aid in gradient flow and prevent vanishing gradients. ## Installation Install the `lumenspark` package via pip: ```bash pip install lumenspark ``` This command installs the Linformer-based language model along with all necessary dependencies. ## Training Progress Below is the training loss plot that shows the progress made during the model training process: ![Training Loss Plot](assets/training_loss_plot.png) ## Quick Start Load the pre-trained model and tokenizer from Hugging Face to perform text generation: ```python from lumenspark import LumensparkModel import torch # 1. Set up the device (GPU if available, else CPU) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(f"Using device: {device}") # 2. Load the model and move it to the device model = LumensparkModel.from_pretrained("anto18671/lumenspark").to(device) # 3. Example input text input_text = "Once upon a time" # 4. Generate text output_text = model.generate( input_text, max_length=100, # Maximum length of the generated sequence temperature=0.7, # Controls randomness in predictions top_k=50, # Top-k sampling to filter high-probability tokens top_p=0.9, # Nucleus sampling to control diversity repetition_penalty=1.2 # Penalize repetition ) # 5. Print the generated text print(output_text) ``` ## Inference Parameters Customize text generation using the following parameters: - **`max_length`**: Maximum length of the generated sequence. 
- **`temperature`**: Controls randomness (lower = more deterministic). - **`top_k`**: Limits sampling to top `k` tokens. - **`top_p`**: Nucleus sampling based on cumulative probability `p`. - **`repetition_penalty`**: Penalizes repeated tokens or phrases. - **`no_repeat_ngram_size`**: Prevents repeated n-grams of specified size. ## Hyperparameters Optimized for performance and efficiency: - **`vocab_size`**: 50,257 - **`embed_dim`**: 768 - **`depth`**: 8 layers - **`heads`**: 8 attention heads - **`seq_length`**: 768 tokens - **`dropout`**: 1/17 - **`k`**: 384 (attention projection) - **`rank`**: 256 (low-rank projections) ## Acknowledgements We would like to extend our gratitude to [RunPod](https://www.runpod.io) for their generous sponsorship, supporting the training and development of Lumenspark. Their contribution has been instrumental in pushing the project forward. ![RunPod Logo](assets/RunPod.webp) ## Sponsorship Support the ongoing development of Lumenspark! ### How to Sponsor Visit [GitHub Sponsors](https://github.com/sponsors/anto18671) and choose a sponsorship tier that suits you. Thank you for your support! ## License This project is licensed under the [MIT License](LICENSE).
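As a companion to the Architecture section above, the sketch below illustrates the idea behind the factorized low-rank projections. It is a minimal, hypothetical illustration of the general technique under assumed layer names and shapes, not the actual Lumenspark implementation.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Approximate a d_in x d_out linear map with two thin factors of rank r."""
    def __init__(self, d_in: int, d_out: int, rank: int = 256):
        super().__init__()
        self.down = nn.Linear(d_in, rank, bias=False)  # d_in -> rank
        self.up = nn.Linear(rank, d_out, bias=True)    # rank -> d_out

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))

# With rank 256, a 768 -> 3072 projection needs roughly
# 768*256 + 256*3072 ≈ 0.98M parameters instead of 768*3072 ≈ 2.36M.
layer = LowRankLinear(768, 3072, rank=256)
print(layer(torch.randn(2, 10, 768)).shape)  # torch.Size([2, 10, 3072])
```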
rawsh/mirrorqwen2.5-0.5b-SimPO-2
rawsh
2024-11-11T04:04:28Z
140
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "cpo", "unsloth", "arxiv:2401.08417", "base_model:rawsh/mirrorqwen2.5-0.5b-SimPO-1", "base_model:finetune:rawsh/mirrorqwen2.5-0.5b-SimPO-1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T03:47:46Z
--- base_model: rawsh/mirrorqwen2.5-0.5b-SimPO-1 library_name: transformers model_name: mirrorqwen2.5-0.5b-SimPO-2 tags: - generated_from_trainer - trl - cpo - unsloth licence: license --- # Model Card for mirrorqwen2.5-0.5b-SimPO-2 This model is a fine-tuned version of [rawsh/mirrorqwen2.5-0.5b-SimPO-1](https://huggingface.co/rawsh/mirrorqwen2.5-0.5b-SimPO-1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rawsh/mirrorqwen2.5-0.5b-SimPO-2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dankgpt/simpo-training/runs/8cv151mo) This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417). ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.2 - Pytorch: 2.4.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite CPO as: ```bibtex @inproceedings{xu2024contrastive, title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}}, author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim}, year = 2024, booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024}, publisher = {OpenReview.net}, url = {https://openreview.net/forum?id=51iwkioZpn} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
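The card does not include the training script itself. For readers unfamiliar with CPO, the sketch below shows roughly how such a run can be set up with TRL's `CPOTrainer`; the dataset, hyperparameters, and SimPO-style settings are illustrative assumptions rather than the authors' actual configuration, and argument names may vary slightly between TRL versions.

```python
# Illustrative sketch only; dataset and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

model_name = "rawsh/mirrorqwen2.5-0.5b-SimPO-1"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# CPO expects a preference dataset with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = CPOConfig(
    output_dir="mirrorqwen2.5-0.5b-SimPO-2",
    loss_type="simpo",              # SimPO variant of the CPO objective
    cpo_alpha=0.0,                  # drop the behaviour-cloning term, as in pure SimPO
    per_device_train_batch_size=4,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = CPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```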
featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF
featherless-ai-quants
2024-11-11T04:04:21Z
9
0
null
[ "gguf", "text-generation", "base_model:abacusai/Dracarys2-72B-Instruct", "base_model:quantized:abacusai/Dracarys2-72B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-11T01:07:29Z
--- base_model: abacusai/Dracarys2-72B-Instruct pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # abacusai/Dracarys2-72B-Instruct GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [abacusai-Dracarys2-72B-Instruct-IQ4_XS](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-IQ4_XS) | 38302.65 MB (folder) | | Q2_K | [abacusai-Dracarys2-72B-Instruct-Q2_K](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q2_K) | 28430.71 MB (folder) | | Q3_K_L | [abacusai-Dracarys2-72B-Instruct-Q3_K_L](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q3_K_L) | 37675.12 MB (folder) | | Q3_K_M | [abacusai-Dracarys2-72B-Instruct-Q3_K_M](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q3_K_M) | 35952.31 MB (folder) | | Q3_K_S | [abacusai-Dracarys2-72B-Instruct-Q3_K_S](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q3_K_S) | 32890.12 MB (folder) | | Q4_K_M | [abacusai-Dracarys2-72B-Instruct-Q4_K_M](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q4_K_M) | 45219.15 MB (folder) | | Q4_K_S | [abacusai-Dracarys2-72B-Instruct-Q4_K_S](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q4_K_S) | 41856.03 MB (folder) | | Q5_K_M | [abacusai-Dracarys2-72B-Instruct-Q5_K_M](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q5_K_M) | 51925.15 MB (folder) | | Q5_K_S | [abacusai-Dracarys2-72B-Instruct-Q5_K_S](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q5_K_S) | 48995.15 MB (folder) | | Q6_K | [abacusai-Dracarys2-72B-Instruct-Q6_K](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q6_K) | 61366.68 MB (folder) | | Q8_0 | [abacusai-Dracarys2-72B-Instruct-Q8_0](https://huggingface.co/featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF/tree/main/abacusai-Dracarys2-72B-Instruct-Q8_0) | 73683.37 MB (folder) | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
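The quantizations above are stored as folders of split GGUF shards. As a rough, hypothetical example (the shard file pattern is an assumption based on the table above), you can fetch a single quantization with `huggingface_hub` and then point llama.cpp at the first shard in the downloaded folder:

```python
from huggingface_hub import snapshot_download

# Download only the Q4_K_M folder from this repository.
local_dir = snapshot_download(
    repo_id="featherless-ai-quants/abacusai-Dracarys2-72B-Instruct-GGUF",
    allow_patterns=["abacusai-Dracarys2-72B-Instruct-Q4_K_M/*"],
)
print(local_dir)
# Then load the first shard (e.g. *-00001-of-0000N.gguf) with llama.cpp:
#   ./llama-cli -m <local_dir>/abacusai-Dracarys2-72B-Instruct-Q4_K_M/<first-shard>.gguf -p "Hello"
```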
shibing624/parrots-chinese-hubert-base
shibing624
2024-11-11T03:49:31Z
80
1
transformers
[ "transformers", "pytorch", "safetensors", "hubert", "feature-extraction", "text-to-speech", "zh", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-to-speech
2024-02-12T11:06:01Z
--- license: apache-2.0 language: - zh pipeline_tag: text-to-speech --- This is the chinese-hubert-base model from https://huggingface.co/lj1995/GPT-SoVITS. It is one of the pretrained models used in https://github.com/shibing624/parrots.
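A minimal feature-extraction sketch with 🤗 Transformers is shown below. It assumes the repository ships the usual preprocessing config alongside the weights; if it does not, construct the `Wav2Vec2FeatureExtractor` manually with 16 kHz settings.

```python
import torch
from transformers import HubertModel, Wav2Vec2FeatureExtractor

model_id = "shibing624/parrots-chinese-hubert-base"
# Assumes a preprocessor_config.json is present in the repo.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = HubertModel.from_pretrained(model_id).eval()

# One second of silent 16 kHz audio as a placeholder; use real speech in practice.
waveform = torch.zeros(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (1, num_frames, 768)
```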
kxbrow9/HollisFLUX2
kxbrow9
2024-11-11T03:47:36Z
7
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-11-11T03:46:51Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: HollisFLUX2 license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # HollisFLUX2 A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `HollisFLUX2` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
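If you prefer 🤗 Diffusers over the UIs listed above, a hypothetical loading sketch is shown below. It assumes access to the gated `black-forest-labs/FLUX.1-dev` base model and a GPU with enough memory; the LoRA weight filename inside this repository may need to be passed explicitly via `weight_name`.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("kxbrow9/HollisFLUX2")  # add weight_name="..." if needed
pipe.to("cuda")

image = pipe(
    "HollisFLUX2 portrait photo, soft natural light",  # include the trigger word
    num_inference_steps=25,
    guidance_scale=3.5,
).images[0]
image.save("hollisflux2.png")
```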
mradermacher/YarnLake-Swap-7B-i1-GGUF
mradermacher
2024-11-11T03:41:02Z
6
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Aryanne/YarnLake-Swap-7B", "base_model:quantized:Aryanne/YarnLake-Swap-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-11-11T00:46:54Z
--- base_model: Aryanne/YarnLake-Swap-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Aryanne/YarnLake-Swap-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/YarnLake-Swap-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality | | 
[GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/YarnLake-Swap-7B-i1-GGUF/resolve/main/YarnLake-Swap-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
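As a small addition to the Usage note above, the snippet below downloads a single quant with `huggingface_hub`; the filename follows the pattern in the table (here the recommended i1-Q4_K_M), and the resulting path can be passed straight to llama.cpp.

```python
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/YarnLake-Swap-7B-i1-GGUF",
    filename="YarnLake-Swap-7B.i1-Q4_K_M.gguf",
)
print(gguf_path)  # e.g. ./llama-cli -m <gguf_path> -p "Hello"
```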
Triangle104/gemma-2-9b-Q8_0-GGUF
Triangle104
2024-11-11T03:39:04Z
10
0
transformers
[ "transformers", "gguf", "unsloth", "gemma2", "gemma", "llama-cpp", "gguf-my-repo", "en", "base_model:unsloth/gemma-2-9b", "base_model:quantized:unsloth/gemma-2-9b", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-11-11T03:38:22Z
--- language: - en library_name: transformers license: gemma tags: - unsloth - transformers - gemma2 - gemma - llama-cpp - gguf-my-repo base_model: unsloth/gemma-2-9b --- # Triangle104/gemma-2-9b-Q8_0-GGUF This model was converted to GGUF format from [`unsloth/gemma-2-9b`](https://huggingface.co/unsloth/gemma-2-9b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/gemma-2-9b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/gemma-2-9b-Q8_0-GGUF --hf-file gemma-2-9b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/gemma-2-9b-Q8_0-GGUF --hf-file gemma-2-9b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/gemma-2-9b-Q8_0-GGUF --hf-file gemma-2-9b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/gemma-2-9b-Q8_0-GGUF --hf-file gemma-2-9b-q8_0.gguf -c 2048 ```
Gummybear05/whisper-small-Y_freq_speed_pause2
Gummybear05
2024-11-11T03:38:02Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "hi", "dataset:aihub_adult_baseline", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-11-11T01:14:14Z
--- library_name: transformers language: - hi license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - aihub_adult_baseline model-index: - name: whisper-small-Yfreq_pause_speed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-Yfreq_pause_speed This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aihub old adult freq speed pause changed dataset. It achieves the following results on the evaluation set: - Loss: 0.2878 - Cer: 7.5012 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.2807 | 0.1289 | 100 | 0.2981 | 8.1767 | | 0.1583 | 0.2579 | 200 | 0.2730 | 7.4777 | | 0.1217 | 0.3868 | 300 | 0.2803 | 7.9182 | | 0.1291 | 0.5158 | 400 | 0.2744 | 7.7890 | | 0.1021 | 0.6447 | 500 | 0.2840 | 8.0416 | | 0.0941 | 0.7737 | 600 | 0.2933 | 8.1356 | | 0.1047 | 0.9026 | 700 | 0.2888 | 7.8066 | | 0.0386 | 1.0309 | 800 | 0.2798 | 7.4013 | | 0.0268 | 1.1599 | 900 | 0.2794 | 7.2545 | | 0.0349 | 1.2888 | 1000 | 0.2858 | 7.2780 | | 0.0292 | 1.4178 | 1100 | 0.2873 | 7.4072 | | 0.0373 | 1.5467 | 1200 | 0.2876 | 7.4248 | | 0.0276 | 1.6757 | 1300 | 0.2857 | 7.4542 | | 0.0287 | 1.8046 | 1400 | 0.2901 | 7.6069 | | 0.0295 | 1.9336 | 1500 | 0.2878 | 7.5012 | ### Framework versions - Transformers 4.47.0.dev0 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
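The usage sections above are still placeholders, so here is a minimal, hypothetical inference sketch using the 🤗 Transformers ASR pipeline; the audio file name is an assumption.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Gummybear05/whisper-small-Y_freq_speed_pause2",
    device=0,  # set to -1 to run on CPU
)
result = asr("sample.wav")  # path to a speech recording
print(result["text"])
```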
jeongho99/results
jeongho99
2024-11-11T03:36:12Z
109
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-11T03:35:24Z
--- library_name: transformers base_model: klue/roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4610 - Accuracy: 0.849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5486 | 1.0 | 1250 | 0.5215 | 0.832 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.1
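Since the intended use is not documented above, the following is only a hypothetical inference sketch; the training dataset and label meanings are unknown, so outputs will carry generic `LABEL_*` names.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jeongho99/results")
print(classifier("이 영화 정말 재미있어요!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.93}]
```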
happylife39/Llama-3.2-1B-Q4_0-GGUF
happylife39
2024-11-11T03:31:54Z
10
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:NousResearch/Llama-3.2-1B", "base_model:quantized:NousResearch/Llama-3.2-1B", "license:llama3.2", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T03:31:48Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo license: llama3.2 extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display “Built with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include “Llama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a “Notice” text file distributed as a part of such copies: “Llama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software “bug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit base_model: NousResearch/Llama-3.2-1B --- # happylife39/Llama-3.2-1B-Q4_0-GGUF This model was converted to GGUF format from [`NousResearch/Llama-3.2-1B`](https://huggingface.co/NousResearch/Llama-3.2-1B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NousResearch/Llama-3.2-1B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo happylife39/Llama-3.2-1B-Q4_0-GGUF --hf-file llama-3.2-1b-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo happylife39/Llama-3.2-1B-Q4_0-GGUF --hf-file llama-3.2-1b-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo happylife39/Llama-3.2-1B-Q4_0-GGUF --hf-file llama-3.2-1b-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo happylife39/Llama-3.2-1B-Q4_0-GGUF --hf-file llama-3.2-1b-q4_0.gguf -c 2048 ```
01-ai/Yi-9B
01-ai
2024-11-11T03:31:36Z
1,464
186
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2403.04652", "arxiv:2311.16502", "arxiv:2401.11944", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-01T05:57:44Z
--- license: apache-2.0 widget: - example_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" - example_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline_tag: text-generation --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"> </picture> </br> </br> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml"> <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg"> </a> </div> <div style="display: inline-block;"> <a href="mailto:[email protected]"> <img src="https://img.shields.io/badge/✉️[email protected]"> </a> </div> </div> <div align="center"> <h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3> </div> <p align="center"> 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a> </p> <p align="center"> 👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a> </p> <p align="center"> 👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="有官方的微信群嘛 · Issue #43 · 01-ai/Yi" target="_blank"> 💬 WeChat </a> </p> <p align="center"> 📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a> </p> <p align="center"> 📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a> </p> <!-- DO NOT REMOVE ME --> <hr> <details open> <summary></b>📕 Table of Contents</b></summary> - [What is Yi?](#what-is-yi) - [Introduction](#introduction) - [Models](#models) - [Chat models](#chat-models) - [Base models](#base-models) - [Model info](#model-info) - [News](#news) - [How to use Yi?](#how-to-use-yi) - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - 
[llama.cpp](#quick-start---llamacpp) - [conda-lock](#quick-start---conda-lock) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) - [Why Yi?](#why-yi) - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Base model performance](#base-model-performance) - [Chat model performance](#chat-model-performance) - [Tech report](#tech-report) - [Citation](#citation) - [Who can use Yi?](#who-can-use-yi) - [Misc.](#misc) - [Acknowledgements](#acknowledgments) - [Disclaimer](#disclaimer) - [License](#license) </details> <hr> # What is Yi? ## Introduction - 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/). - 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example, - Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024). - Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023). - 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem. <details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br> > 💡 TL;DR > > The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama. - Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018. - Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi. - Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems. - However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights. - As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure. - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. 
This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/). </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## News <details> <summary>🔥 <b>2024-07-29</b>: The <a href="https://github.com/Haijian06/Yi/tree/main/Cookbook">Yi Cookbook 1.0 </a> is released, featuring tutorials and examples in both Chinese and English.</summary> </details> <details> <summary>🎯 <b>2024-05-13</b>: The <a href="https://github.com/01-ai/Yi-1.5">Yi-1.5 series models </a> are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.</summary> </details> <details> <summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary> </details> <details> <summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary> </details> <details open> <summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary> <br> In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance. </details> <details open> <summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary> <br> <code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. </details> <details open> <summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary> <br> <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li> </details> <details> <summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary> <br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ. 
- `Yi-34B-Chat` - `Yi-34B-Chat-4bits` - `Yi-34B-Chat-8bits` - `Yi-6B-Chat` - `Yi-6B-Chat-4bits` - `Yi-6B-Chat-8bits` You can try some of them interactively at: - [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Replicate](https://replicate.com/01-ai) </details> <details> <summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary> </details> <details> <summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary> <br>Application form: - [English](https://cn.mikecrm.com/l91ODJf) - [Chinese](https://cn.mikecrm.com/gnEZjiQ) </details> <details> <summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary> <br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K. </details> <details> <summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary> <br>The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Models Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment). ### Chat models | Model | Download | |---|---| |Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat) | |Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-4bits) | |Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-8bits) | |Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat) | |Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-4bits) | |Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). 
</sup></sub> ### Base models | Model | Download | |---|---| |Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits)| |Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B)| |Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub> ### Model info - For chat and base models <table> <thead> <tr> <th>Model</th> <th>Intro</th> <th>Default context window</th> <th>Pretrained tokens</th> <th>Training Data Date</th> </tr> </thead> <tbody><tr> <td>6B series models</td> <td>They are suitable for personal and academic use.</td> <td rowspan="3">4K</td> <td>3T</td> <td rowspan="3">Up to June 2023</td> </tr> <tr> <td>9B series models</td> <td>It is the best at coding and math in the Yi series models.</td> <td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td> </tr> <tr> <td>34B series models</td> <td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It&#39;s a cost-effective solution that&#39;s affordable and equipped with emergent ability.</td> <td>3T</td> </tr> </tbody></table> - For chat models <details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary> <ul> <br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. <br>However, this higher diversity might amplify certain existing issues, including: <li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.</li> <li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. 
The increased diversity can lead to varying results even under similar input conditions.</li> <li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li> <li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.</li> </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # How to use Yi? - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - [llama.cpp](#quick-start---llamacpp) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) ## Quick start > **💡 Tip**: If you want to get started with the Yi model and explore different methods for inference, check out the [Yi Cookbook](https://github.com/01-ai/Yi/tree/main/Cookbook). ### Choose your path Select one of the following paths to begin your journey with Yi! ![Quick start - Choose your path](https://github.com/01-ai/Yi/blob/main/assets/img/quick_start_path.png?raw=true) #### 🎯 Deploy Yi locally If you prefer to deploy Yi models locally, - 🙋‍♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods: - [pip](#quick-start---pip) - [Docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - 🙋‍♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp). #### 🎯 Not to deploy Yi locally If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options. ##### 🙋‍♀️ Run Yi with APIs If you want to explore more features of Yi, you can adopt one of these methods: - Yi APIs (Yi official) - [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access! - [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate) ##### 🙋‍♀️ Run Yi in playground If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options: - [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). - [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate) ##### 🙋‍♀️ Chat with Yi If you want to chat with Yi, you can use one of these online services, which offer a similar user experience: - [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face) - No registration is required. - [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). 
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - pip This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference. #### Step 0: Prerequisites - Make sure Python 3.10 or a later version is installed. - If you want to run other Yi models, see [software and hardware requirements](#deployment). #### Step 1: Prepare your environment To set up the environment and install the required packages, execute the following command. ```bash git clone https://github.com/01-ai/Yi.git cd yi pip install -r requirements.txt ``` #### Step 2: Download the Yi model You can download the weights and tokenizer of Yi models from the following sources: - [Hugging Face](https://huggingface.co/01-ai) - [ModelScope](https://www.modelscope.cn/organization/01ai/) - [WiseModel](https://wisemodel.cn/organization/01.AI) #### Step 3: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model 1. Create a file named `quick_start.py` and copy the following content to it. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = '<your-model-path>' tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) # Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM. model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ``` 2. Run `quick_start.py`. ```bash python quick_start.py ``` Then you can see an output similar to the one below. 🥳 ```bash Hello! How can I assist you today? ``` ##### Perform inference with Yi base model - Yi-34B The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model). You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo). ```bash python demo/text_generation.py --model <your-model-path> ``` Then you can see an output similar to the one below. 🥳 <details> <summary>Output. ⬇️ </summary> <br> **Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry, **Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up... 
</details> - Yi-9B Input ```bash from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_DIR = "01-ai/Yi-9B" model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto") tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False) input_text = "# write the quick sort algorithm" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=256) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output ```bash # write the quick sort algorithm def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) # test the quick sort algorithm print(quick_sort([3, 6, 8, 10, 1, 2, 1])) ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - Docker <details> <summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary> <br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference. <h4>Step 0: Prerequisites</h4> <p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p> <h4> Step 1: Start Docker </h4> <pre><code>docker run -it --gpus all \ -v &lt;your-model-path&gt;: /models ghcr.io/01-ai/yi:latest </code></pre> <p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p> <h4>Step 2: Perform inference</h4> <p>You can perform inference with Yi chat or base models as below.</p> <h5>Perform inference with Yi chat model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>model_path = '&lt;your-model-mount-path&gt;'</code> instead of <code>model_path = '&lt;your-model-path&gt;'</code>.</p> <h5>Perform inference with Yi base model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>--model &lt;your-model-mount-path&gt;'</code> instead of <code>model &lt;your-model-path&gt;</code>.</p> </details> ### Quick start - conda-lock <details> <summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary> <br> You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies. <br> To install the dependencies, follow these steps: 1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>. 2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies. 
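Once the environment has been created, you can activate it and sanity-check the installation. The snippet below is a minimal sketch; it assumes the environment name <code>yi</code> used in the command above and that PyTorch and transformers are among the locked dependencies.

```bash
# Activate the environment created from conda-lock.yml
micromamba activate yi

# Verify that the core dependencies resolved correctly
python -c "import torch, transformers; print(torch.__version__, transformers.__version__)"
```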
</details> ### Quick start - llama.cpp <a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">The following tutorial </a> will guide you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference. <details> <summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary> <br><a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">This tutorial</a> guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p> - [Step 0: Prerequisites](#step-0-prerequisites) - [Step 1: Download llama.cpp](#step-1-download-llamacpp) - [Step 2: Download Yi model](#step-2-download-yi-model) - [Step 3: Perform inference](#step-3-perform-inference) #### Step 0: Prerequisites - This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip. - Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine. #### Step 1: Download `llama.cpp` To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command. ```bash git clone [email protected]:ggerganov/llama.cpp.git ``` #### Step 2: Download Yi model 2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command. ```bash GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF ``` 2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command. ```bash git-lfs pull --include yi-chat-6b.Q2_K.gguf ``` #### Step 3: Perform inference To perform inference with the Yi model, you can use one of the following methods. - [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal) - [Method 2: Perform inference in web](#method-2-perform-inference-in-web) ##### Method 1: Perform inference in terminal To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command. > ##### Tips > > - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model. > > - By default, the model operates in completion mode. > > - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage. ```bash make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e ... How do you feed your pet fox? Please answer this question in 6 simple steps: Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables. Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day. Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise. 
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress. Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections. Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care. ... ``` Now you have successfully asked a question to the Yi model and got an answer! 🥳 ##### Method 2: Perform inference in web 1. To initialize a lightweight and swift chatbot, run the following command. ```bash cd llama.cpp ./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf ``` Then you can get an output like this: ```bash ... llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 5000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M2 Pro ggml_metal_init: picking default device: Apple M2 Pro ggml_metal_init: ggml.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal' ggml_metal_init: GPU name: Apple M2 Pro ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008) ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB ggml_metal_init: maxTransferRate = built-in GPU ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67) llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67) llama_build_graph: non-view tensors processed: 676/676 llama_new_context_with_model: compute buffer total size = 159.19 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67) Available slots: -> Slot 0 - max context: 2048 llama server listening at http://0.0.0.0:8080 ``` 2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar. ![Yi model chatbot interface - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp1.png?raw=true) 3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer. ![Ask a question to Yi model - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp2.png?raw=true) </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Web demo You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this senario). [Step 1: Prepare your environment](#step-1-prepare-your-environment). [Step 2: Download the Yi model](#step-2-download-the-yi-model). Step 3. To start a web service locally, run the following command. ```bash python demo/web_demo.py -c <your-model-path> ``` You can access the web UI by entering the address provided in the console into your browser. 
![Quick start - web demo](https://github.com/01-ai/Yi/blob/main/assets/img/yi_34b_chat_web_demo.gif?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Fine-tuning ```bash bash finetune/scripts/run_sft_Yi_6b.sh ``` Once finished, you can compare the finetuned model and the base model with the following command: ```bash bash finetune/scripts/run_eval.sh ``` <details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul> ### Finetune code for Yi 6B and 34B #### Preparation ##### From Image By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model. You can also prepare your customized dataset in the following `jsonl` format: ```json { "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." } ``` And then mount them in the container to replace the default ones: ```bash docker run -it \ -v /path/to/save/finetuned/model/:/finetuned-model \ -v /path/to/train.jsonl:/yi/finetune/data/train.json \ -v /path/to/eval.jsonl:/yi/finetune/data/eval.json \ ghcr.io/01-ai/yi:latest \ bash finetune/scripts/run_sft_Yi_6b.sh ``` ##### From Local Server Make sure you have conda. If not, use ```bash mkdir -p ~/miniconda3 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 rm -rf ~/miniconda3/miniconda.sh ~/miniconda3/bin/conda init bash source ~/.bashrc ``` Then, create a conda env: ```bash conda create -n dev_env python=3.10 -y conda activate dev_env pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7 ``` #### Hardware Setup For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh). A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB. #### Quick Start Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like: ```bash |-- $MODEL_PATH | |-- config.json | |-- pytorch_model-00001-of-00002.bin | |-- pytorch_model-00002-of-00002.bin | |-- pytorch_model.bin.index.json | |-- tokenizer_config.json | |-- tokenizer.model | |-- ... ``` Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static. ```bash |-- $DATA_PATH | |-- data | | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet | | |-- test-00000-of-00001-8c7c51afc6d45980.parquet | |-- dataset_infos.json | |-- README.md ``` `finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) ```bash |-- $DATA_PATH |--data |-- train.jsonl |-- eval.jsonl ``` `cd` into the scripts folder, copy and paste the script, and run. For example: ```bash cd finetune/scripts bash run_sft_Yi_6b.sh ``` For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient. 
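If you are preparing a custom dataset rather than using the bundled examples, a small helper script can write records in the `prompt`/`chosen` format described in the Preparation section above. The following is a minimal sketch: the question-answer pairs and output paths are placeholders, so adjust them to match the data you mount or copy into `finetune/data/`.

```python
import json

# Placeholder question-answer pairs; replace with your own data source.
raw_pairs = [
    ("Who are you?", "I'm Yi."),
    ("What can you do?", "I can answer questions and help with everyday writing tasks."),
    ("Where do you run?", "I can run locally on a GPU server or in the cloud."),
]

def to_record(question: str, answer: str) -> dict:
    # Mirror the "Human: ... Assistant:" prompt layout expected by the SFT scripts.
    return {"prompt": f"Human: {question} Assistant:", "chosen": answer}

def write_jsonl(pairs, path: str) -> None:
    with open(path, "w", encoding="utf-8") as f:
        for question, answer in pairs:
            f.write(json.dumps(to_record(question, answer), ensure_ascii=False) + "\n")

# Write a simple train/eval split in the layout expected under finetune/data/.
write_jsonl(raw_pairs[:-1], "train.jsonl")
write_jsonl(raw_pairs[-1:], "eval.jsonl")
```

Keeping a separate eval file mirrors the mount points shown in the Docker example above, so the evaluation script can compare models on prompts that were not used for training.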
#### Evaluation

```bash
cd finetune/scripts
bash run_eval.sh
```

Then you'll see the answers from both the base model and the fine-tuned model.
</ul>
</details>

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Quantization

#### GPT-Q
```bash
python quantization/gptq/quant_autogptq.py \
 --model /base_model \
 --output_dir /quantized_model \
 --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/gptq/eval_quantized_model.py \
 --model /quantized_model \
 --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary>
<ul>

#### GPT-Q quantization

[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model.

Yi models can be GPT-Q quantized without a lot of effort. We provide a step-by-step tutorial below.

To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama). Hugging Face transformers has integrated optimum and auto-gptq to perform GPT-Q quantization on language models.

##### Do Quantization

The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:

```bash
python quant_autogptq.py --model /base_model \
 --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>

#### AWQ
```bash
python quantization/awq/quant_autoawq.py \
 --model /base_model \
 --output_dir /quantized_model \
 --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/awq/eval_quantized_model.py \
 --model /quantized_model \
 --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary>
<ul>

#### AWQ quantization

[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) method for LLMs.

Yi models can be AWQ quantized without a lot of effort. We provide a step-by-step tutorial below.

To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).

##### Do Quantization

The `quant_autoawq.py` script is provided for you to perform AWQ quantization:

```bash
python quant_autoawq.py --model /base_model \
 --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Deployment

If you want to deploy Yi models, make sure you meet the software and hardware requirements.

#### Software requirements

Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software |---|--- Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi) Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation) #### Hardware requirements Before deploying Yi in your environment, make sure your hardware meets the following requirements. ##### Chat models | Model | Minimum VRAM | Recommended GPU Example | |:----------------------|:--------------|:-------------------------------------:| | Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) | | Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB)<br> 1 x RTX 4060 (8 GB) | | Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) | | Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB)<br> 1 x A800 (80GB) | | Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) | | Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB)<br> 1 x A800 (40 GB) | Below are detailed minimum VRAM requirements under different batch use cases. | Model | batch=1 | batch=4 | batch=16 | batch=32 | | ----------------------- | ------- | ------- | -------- | -------- | | Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB | | Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB | | Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB | | Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB | | Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB | | Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB | ##### Base models | Model | Minimum VRAM | Recommended GPU Example | |----------------------|--------------|:-------------------------------------:| | Yi-6B | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) | | Yi-6B-200K | 50 GB | 1 x A800 (80 GB) | | Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) | | Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) | | Yi-34B-200K | 200 GB | 4 x A800 (80 GB) | <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### FAQ <details> <summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary> <br> #### 💡Fine-tuning - <strong>Base model or Chat model - which to fine-tune?</strong> <br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task. - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice. - On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice. - It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements. - <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong> <br> The key distinction between full-scale fine-tuning on `Yi-34B`and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes. - Yi-34B-Chat employs a Special Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely. - The Base model's fine-tuning is more versatile, with a relatively high performance potential. 
- If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to. - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet. #### 💡Quantization - <strong>Quantized model versus original model - what is the performance gap?</strong> - The performance variance is largely contingent on the quantization method employed and the specific use cases of these models. For instance, when it comes to models provided by the AWQ official, from a Benchmark standpoint, quantization might result in a minor performance drop of a few percentage points. - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results. #### 💡General - <strong>Where can I source fine-tuning question answering datasets?</strong> - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available. - Additionally, Github offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets. - <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong> <br> The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full parameter fine-tuning, you'll need 8 GPUs each with 80 GB; however, more economical solutions like Lora require less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance. - <strong>Are there any third-party platforms that support chat functionality for the Yi-34b-200k model?</strong> <br> If you're looking for third-party Chats, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat). </details> ### Learning hub <details> <summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary> <br> Welcome to the Yi learning hub! Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more. The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions! At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below. With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 
🥳 #### Tutorials ##### Blog tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐](https://mp.weixin.qq.com/s/Ri2ap9_5EMzdfiBhSSL_MQ) | 2024-05-20 | [苏洋](https://github.com/soulteary) | | [使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words-s](https://blog.csdn.net/freewebsys/article/details/134698597?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-17-134698597-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-20 | [fly-iot](https://gitee.com/fly-iot) | | [Yi-VL 最佳实践](https://modelscope.cn/docs/yi-vl最佳实践) | 2024-05-20 | [ModelScope](https://github.com/modelscope) | | [一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型](https://mp.weixin.qq.com/s/ntMs2G_XdWeM3I6RUOBJrA) | 2024-05-13 | [Second State](https://github.com/second-state) | | [零一万物开源Yi-1.5系列大模型](https://mp.weixin.qq.com/s/d-ogq4hcFbsuL348ExJxpA) | 2024-05-13 | [刘聪](https://github.com/liucongg) | | [零一万物Yi-1.5系列模型发布并开源! 34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦!](https://mp.weixin.qq.com/s/3wD-0dCgXB646r720o8JAg) | 2024-05-13 | [ModelScope](https://github.com/modelscope) | | [Yi-34B 本地部署简单测试](https://blog.csdn.net/arkohut/article/details/135331469?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135331469-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [漆妮妮](https://space.bilibili.com/1262370256) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上)](https://blog.csdn.net/weixin_53443275/article/details/136091398?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-5-136091398-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇)](https://blog.csdn.net/weixin_53443275/article/details/136096309) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型](https://mp.weixin.qq.com/s/bBgzGJvUqIohodcy9U-pFw) | 2024-05-13 | AI工程师笔记 | | [使用零一万物 200K 模型和 Dify 快速搭建模型应用](https://zhuanlan.zhihu.com/p/686774859) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [(持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用](https://zhuanlan.zhihu.com/p/671549900) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探](https://mp.weixin.qq.com/s/WaygSfn5T8ZPB1mPdGADEQ) | 2024-05-11 | 江湖评谈 | | [技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台)](https://blog.csdn.net/ucloud2012/article/details/137187469) | 2024-05-11 | [MumuLab](https://blog.csdn.net/ucloud2012?type=blog) | | [多模态大模型Yi-VL-plus体验 效果很棒](https://zhuanlan.zhihu.com/p/694736111) | 2024-04-27 | [大家好我是爱因](https://www.zhihu.com/people/iamein) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 
words-s](https://blog.csdn.net/freewebsys/article/details/134725765?ops_request_misc=%7B%22request%5Fid%22%3A%22171636356716800211598950%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636356716800211598950&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-9-134725765-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-27 | [fly-iot](https://gitee.com/fly-iot) | | [Getting Started with Yi-1.5-9B-Chat](https://www.secondstate.io/articles/yi-1.5-9b-chat/) | 2024-04-27 | [Second State](https://github.com/second-state) | | [基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记](https://mp.weixin.qq.com/s/_ea6g0pzzeO4WyYtuWycWQ) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版](https://blog.csdn.net/alarey/article/details/137769471?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-16-137769471-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-21 | [My的梦想已实现](https://blog.csdn.net/alarey?type=blog) | | [【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words-s,vllm要求算力在7以上的显卡就可以](https://blog.csdn.net/freewebsys/article/details/134754086) | 2024-03-22 | [fly-iot](https://gitee.com/fly-iot) | | [零一万物大模型部署+微调总结](https://blog.csdn.net/v_wus/article/details/135704126?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-18-135704126-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-22 | [v_wus](https://blog.csdn.net/v_wus?type=blog) | | [零一万物Yi大模型vllm推理时Yi-34B或Yi-6bchat重复输出的解决方案](https://blog.csdn.net/qq_39667443/article/details/136028776?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-6-136028776-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [郝铠锋](https://blog.csdn.net/qq_39667443?type=blog) | | [Yi-34B微调训练](https://blog.csdn.net/lsjlnd/article/details/135336984?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-12-135336984-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [lsjlnd](https://blog.csdn.net/lsjlnd?type=blog) | | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) | | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) | | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) | | [零一科技Yi-34B 
Chat大模型环境搭建&推理](https://blog.csdn.net/zzq1989_/article/details/135597181?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-8-135597181-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [要养家的程序员](https://blog.csdn.net/zzq1989_?type=blog) | | [基于LLaMA Factory,单卡3小时训练专属大模型 Agent](https://blog.csdn.net/m0_59596990/article/details/135760285?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135760285-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [机器学习社区](https://blog.csdn.net/m0_59596990?type=blog) | | [双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录](https://blog.csdn.net/arkohut/article/details/135321242?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135321242-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [漆妮妮](https://space.bilibili.com/1262370256) | | [【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b)](https://blog.csdn.net/qq_40302568/article/details/135040985?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-30-135040985-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [aq_Seabiscuit](https://blog.csdn.net/qq_40302568?type=blog) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://blog.csdn.net/arkohut/article/details/135274973) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行?](https://blog.csdn.net/u014374009/article/details/136327696) | 2023-12-28 | [代码讲故事](https://blog.csdn.net/u014374009?type=blog) | | [LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调](https://blog.csdn.net/BIT_666/article/details/134990402) | 2023-12-18 | [BIT_666](https://bitddd.blog.csdn.net/?type=blog) | | [通过vllm框架进行大模型推理](https://blog.csdn.net/weixin_45920955/article/details/135300561?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-13-135300561-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2023-12-18 | [土山炮](https://blog.csdn.net/weixin_45920955?type=blog) | | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) | | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) | | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) | | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 
显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) | ##### GitHub Project | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------- | | [yi-openai-proxy](https://github.com/soulteary/yi-openai-proxy) | 2024-05-11 | [苏洋](https://github.com/soulteary) | | [基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集](https://github.com/zjrwtx/bilibiliQA_databuilder) | 2024-04-29 | [正经人王同学](https://github.com/zjrwtx) | | [基于视频网站和零一万物大模型构建大语言模型高质量训练数据集](https://github.com/zjrwtx/VideoQA_databuilder) | 2024-04-25 | [正经人王同学](https://github.com/zjrwtx) | | [基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友](https://github.com/zjrwtx/open_summary) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [Food-GPT-Yi-model](https://github.com/ThisisHubert/FoodGPT-Yi-model) | 2024-04-21 | [Hubert S](https://github.com/ThisisHubert) | ##### Video tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) | | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2024-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型](https://www.bilibili.com/video/BV16i421X7Jx/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-14 | [titan909](https://space.bilibili.com/526393761) | | [Yi-1.5: True Apache 2.0 Competitor to LLAMA-3](https://www.youtube.com/watch?v=KCDYrfWeTRc) | 2024-05-13 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks](https://www.youtube.com/watch?v=Ba-G7Il0UkA) | 2024-05-13 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [how to install Ollama and run Yi 6B](https://www.youtube.com/watch?v=4Jnar7OUHqQ) | 2024-05-13 | [Ridaa Davids](https://www.youtube.com/@quantanovabusiness) | | [地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B](https://www.bilibili.com/video/BV1Xm411C7V1/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-04 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答](https://www.bilibili.com/video/BV11i421C7B5/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-03 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [基于Yi-34B的领域知识问答项目演示](https://www.bilibili.com/video/BV1zZ42177ZA/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-02 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [使用RTX4090+GaLore算法 
全参微调Yi-6B大模型](https://www.bilibili.com/video/BV1ax4y1U7Ep/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-24 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行](https://www.youtube.com/watch?v=VL-W0TnLCns) | 2024-03-20 | [刘悦的技术博客](https://v3u.cn/) | | [无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲](https://www.youtube.com/watch?v=rBvbgwz3oHM) | 2024-03-16 | [刘悦的技术博客](https://v3u.cn/) | | [量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署](https://www.bilibili.com/video/BV1jx421y7xj/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-05 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字](https://www.bilibili.com/video/BV1BB421z7oA/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-27 | [fly-iot](https://gitee.com/fly-iot) | | [Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏](https://www.bilibili.com/video/BV14J4m1e77f/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-25 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2](https://www.bilibili.com/video/BV19v421677y/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-23 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新](https://www.bilibili.com/video/BV194421F7Fy/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-20 | [fly-iot](https://gitee.com/fly-iot) | | [【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功](https://www.bilibili.com/video/BV19Z421z7cv/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-06 | [fly-iot](https://gitee.com/fly-iot) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1](https://www.bilibili.com/video/BV1tU421o7Co/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-05 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南](https://www.bilibili.com/video/BV1hC411z7xu/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-30 | [小饭护法要转码](https://space.bilibili.com/39486865?spm_id_from=333.788.0.0) | | [Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows](https://www.youtube.com/watch?v=cZs2jRtl0bs) | 2024-01-22 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试)](https://www.bilibili.com/video/BV1VT4y1b7Th/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [小吴苹果机器人](https://space.bilibili.com/1732749682?spm_id_from=333.788.0.0) | | [【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置](https://www.bilibili.com/video/BV1ia4y1y7JH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [fly-iot](https://gitee.com/fly-iot) | | [这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥](https://www.youtube.com/watch?v=eahXJrdtQuc) | 2024-01-20 | [晓漫吧](https://www.youtube.com/@xiaomanba) | | [大模型推理 NvLink 桥接器有用吗|双卡 A6000 
测试一下](https://www.bilibili.com/video/BV1AW4y1w7DC/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-17 | [漆妮妮](https://space.bilibili.com/1262370256) | | [大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能](https://www.bilibili.com/video/BV1aK4y1z7GF/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-15 | [漆妮妮](https://space.bilibili.com/1262370256) | | [C-Eval 大语言模型评测基准- 用 LM Evaluation Harness + vLLM 跑起来](https://www.bilibili.com/video/BV1Yw411g7ZL/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-11 | [漆妮妮](https://space.bilibili.com/1262370256) | | [双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录](https://www.bilibili.com/video/BV1p94y1c7ak/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-01 | [漆妮妮](https://space.bilibili.com/1262370256) | | [手把手教学!使用 vLLM 快速部署 Yi-34B-Chat](https://www.bilibili.com/video/BV1ew41157Mk/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-26 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁](https://www.bilibili.com/video/BV1uc41117zz/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-21 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s](https://www.bilibili.com/video/BV1nj41157L3/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-02 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G](https://www.bilibili.com/video/BV1BM411R7ae/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s](https://www.bilibili.com/video/BV1Hu4y1L7BH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [Yi大模型一键本地部署 技术小白玩转AI](https://www.bilibili.com/video/BV16H4y117md/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [技术小白玩转AI](https://space.bilibili.com/3546586137234288?spm_id_from=333.788.0.0) | | [01.AI's Yi-6B: Overview and Fine-Tuning](https://www.youtube.com/watch?v=mye-UOkAliQ) | 2023-11-28 | [AI Makerspace](https://www.youtube.com/@AI-Makerspace) | | [Yi 34B Chat LLM outperforms Llama 70B](https://www.youtube.com/watch?v=RYtrF-R5jDc) | 2023-11-27 | [DLExplorer](https://www.youtube.com/@DLExplorers-lg7dt) | | [How to run open source models on mac Yi 34b on m3 Max](https://www.youtube.com/watch?v=GAo-dopkgjI) | 2023-11-26 | [TECHNO PREMIUM](https://www.youtube.com/@technopremium91) | | [Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING ](https://www.youtube.com/watch?v=7WBojwwv5Qo) | 2023-11-24 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat](https://www.youtube.com/watch?v=bWCjwtu_tHs) | 2023-11-24 | [Sam Witteveen](https://www.youtube.com/@samwitteveenai) | | [在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器)](https://www.bilibili.com/video/BV1SQ4y18744/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-11-15 | [Second 
State](https://github.com/second-state) | | [Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server)](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-14 | [Second State](https://github.com/second-state) | | [How to Install Yi 34B 200K Llamafied on Windows Laptop](https://www.youtube.com/watch?v=enoha4K4HkQ) | 2023-11-11 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | </details> # Why Yi? - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) - [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k) - [Yi-9B](#yi-9b) ## Ecosystem Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity. - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) ### Upstream The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model). ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False) model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto") ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Downstream > 💡 Tip > > - Feel free to create a PR and share the fantastic work you've built using the Yi series models. > > - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`. #### Serving If you want to get up with Yi in a few minutes, you can use the following services built upon Yi. - Yi-34B-Chat: you can chat with Yi using one of the following platforms: - [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand! - [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs. - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization. #### Quantization If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage. 
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. - [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: - [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ) - [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ) - [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ) - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). - [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm). - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API - [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box. - [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. ![Chat model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_chat.png?raw=true) <details> <summary> Evaluation methods and challenges. ⬇️ </summary> - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. - **Zero-shot vs. 
few-shot**: in chat models, the zero-shot approach is more commonly employed. - **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. - **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets </details> ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. ![Base model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_base.png?raw=true) <details> <summary> Evaluation methods. ⬇️</summary> - **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. - **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. - **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. - **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. - **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. - **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". - **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. </details> #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. ![Yi-9B benchmark - details](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_details.png?raw=true) - In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - overall](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_overall.png?raw=true) - In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. 
![Yi-9B benchmark - code](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_code.png?raw=true)

- In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.

![Yi-9B benchmark - math](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_math.png?raw=true)

- In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.

![Yi-9B benchmark - text](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_text.png?raw=true)

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

# Who can use Yi?

Everyone! 🙌 ✅

The code and weights of the Yi series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE), which means the Yi series models are free for personal usage, academic purposes, and commercial use.

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

# Misc.

### Acknowledgments

A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation.

[![yi contributors](https://contrib.rocks/image?repo=01-ai/yi&max=2000&columns=15)](https://github.com/01-ai/yi/graphs/contributors)

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Disclaimer

We use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns.

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### License

The code and weights of the Yi-1.5 series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE).

If you create derivative works based on this model, please include the following attribution in your derivative works:

This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License.

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>
01-ai/Yi-34B-200K
01-ai
2024-11-11T03:31:33Z
5,487
318
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "arxiv:2403.04652", "arxiv:2311.16502", "arxiv:2401.11944", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-06T01:46:54Z
--- license: apache-2.0 widget: - example_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" - example_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline_tag: text-generation --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"> </picture> </br> </br> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml"> <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg"> </a> </div> <div style="display: inline-block;"> <a href="mailto:[email protected]"> <img src="https://img.shields.io/badge/✉️[email protected]"> </a> </div> </div> <div align="center"> <h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3> </div> <p align="center"> 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a> </p> <p align="center"> 👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a> </p> <p align="center"> 👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="有官方的微信群嘛 · Issue #43 · 01-ai/Yi" target="_blank"> 💬 WeChat </a> </p> <p align="center"> 📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a> </p> <p align="center"> 📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a> </p> <!-- DO NOT REMOVE ME --> <hr> <details open> <summary></b>📕 Table of Contents</b></summary> - [What is Yi?](#what-is-yi) - [Introduction](#introduction) - [Models](#models) - [Chat models](#chat-models) - [Base models](#base-models) - [Model info](#model-info) - [News](#news) - [How to use Yi?](#how-to-use-yi) - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - 
[llama.cpp](#quick-start---llamacpp) - [conda-lock](#quick-start---conda-lock) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) - [Why Yi?](#why-yi) - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Base model performance](#base-model-performance) - [Chat model performance](#chat-model-performance) - [Tech report](#tech-report) - [Citation](#citation) - [Who can use Yi?](#who-can-use-yi) - [Misc.](#misc) - [Acknowledgements](#acknowledgments) - [Disclaimer](#disclaimer) - [License](#license) </details> <hr> # What is Yi? ## Introduction - 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/). - 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example, - Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024). - Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023). - 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem. <details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br> > 💡 TL;DR > > The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama. - Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018. - Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi. - Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems. - However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights. - As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure. - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. 
This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/). </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## News <details> <summary>🔥 <b>2024-07-29</b>: The <a href="https://github.com/Haijian06/Yi/tree/main/Cookbook">Yi Cookbook 1.0 </a> is released, featuring tutorials and examples in both Chinese and English.</summary> </details> <details> <summary>🎯 <b>2024-05-13</b>: The <a href="https://github.com/01-ai/Yi-1.5">Yi-1.5 series models </a> are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.</summary> </details> <details> <summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary> </details> <details> <summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary> </details> <details open> <summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary> <br> In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance. </details> <details open> <summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary> <br> <code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. </details> <details open> <summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary> <br> <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li> </details> <details> <summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary> <br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ. 
- `Yi-34B-Chat` - `Yi-34B-Chat-4bits` - `Yi-34B-Chat-8bits` - `Yi-6B-Chat` - `Yi-6B-Chat-4bits` - `Yi-6B-Chat-8bits` You can try some of them interactively at: - [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Replicate](https://replicate.com/01-ai) </details> <details> <summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary> </details> <details> <summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary> <br>Application form: - [English](https://cn.mikecrm.com/l91ODJf) - [Chinese](https://cn.mikecrm.com/gnEZjiQ) </details> <details> <summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary> <br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K. </details> <details> <summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary> <br>The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Models Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment). ### Chat models | Model | Download | |---|---| |Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat) | |Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-4bits) | |Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-8bits) | |Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat) | |Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-4bits) | |Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). 
</sup></sub> ### Base models | Model | Download | |---|---| |Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits)| |Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B)| |Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub> ### Model info - For chat and base models <table> <thead> <tr> <th>Model</th> <th>Intro</th> <th>Default context window</th> <th>Pretrained tokens</th> <th>Training Data Date</th> </tr> </thead> <tbody><tr> <td>6B series models</td> <td>They are suitable for personal and academic use.</td> <td rowspan="3">4K</td> <td>3T</td> <td rowspan="3">Up to June 2023</td> </tr> <tr> <td>9B series models</td> <td>It is the best at coding and math in the Yi series models.</td> <td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td> </tr> <tr> <td>34B series models</td> <td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It&#39;s a cost-effective solution that&#39;s affordable and equipped with emergent ability.</td> <td>3T</td> </tr> </tbody></table> - For chat models <details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary> <ul> <br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. <br>However, this higher diversity might amplify certain existing issues, including: <li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.</li> <li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. 
The increased diversity can lead to varying results even under similar input conditions.</li> <li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li> <li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.</li> </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # How to use Yi? - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - [llama.cpp](#quick-start---llamacpp) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) ## Quick start > **💡 Tip**: If you want to get started with the Yi model and explore different methods for inference, check out the [Yi Cookbook](https://github.com/01-ai/Yi/tree/main/Cookbook). ### Choose your path Select one of the following paths to begin your journey with Yi! ![Quick start - Choose your path](https://github.com/01-ai/Yi/blob/main/assets/img/quick_start_path.png?raw=true) #### 🎯 Deploy Yi locally If you prefer to deploy Yi models locally, - 🙋‍♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods: - [pip](#quick-start---pip) - [Docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - 🙋‍♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp). #### 🎯 Not to deploy Yi locally If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options. ##### 🙋‍♀️ Run Yi with APIs If you want to explore more features of Yi, you can adopt one of these methods: - Yi APIs (Yi official) - [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access! - [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate) ##### 🙋‍♀️ Run Yi in playground If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options: - [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). - [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate) ##### 🙋‍♀️ Chat with Yi If you want to chat with Yi, you can use one of these online services, which offer a similar user experience: - [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face) - No registration is required. - [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). 
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Quick start - pip

This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference.

#### Step 0: Prerequisites

- Make sure Python 3.10 or a later version is installed.

- If you want to run other Yi models, see [software and hardware requirements](#deployment).

#### Step 1: Prepare your environment

To set up the environment and install the required packages, execute the following command.

```bash
git clone https://github.com/01-ai/Yi.git
cd Yi
pip install -r requirements.txt
```

#### Step 2: Download the Yi model

You can download the weights and tokenizer of Yi models from the following sources:

- [Hugging Face](https://huggingface.co/01-ai)
- [ModelScope](https://www.modelscope.cn/organization/01ai/)
- [WiseModel](https://wisemodel.cn/organization/01.AI)

#### Step 3: Perform inference

You can perform inference with Yi chat or base models as below.

##### Perform inference with Yi chat model

1. Create a file named `quick_start.py` and copy the following content to it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '<your-model-path>'

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)

# Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```

2. Run `quick_start.py`.

```bash
python quick_start.py
```

Then you can see an output similar to the one below. 🥳

```bash
Hello! How can I assist you today?
```

##### Perform inference with Yi base model

- Yi-34B

The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).

You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo).

```bash
python demo/text_generation.py --model <your-model-path>
```

Then you can see an output similar to the one below. 🥳

<details>

<summary>Output. ⬇️ </summary>

<br>

**Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry,

**Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...
</details>

- Yi-9B

  Input

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "01-ai/Yi-9B"
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)

input_text = "# write the quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

  Output

```python
# write the quick sort algorithm
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)

# test the quick sort algorithm
print(quick_sort([3, 6, 8, 10, 1, 2, 1]))
```

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Quick start - Docker
<details>
<summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference.
<h4>Step 0: Prerequisites</h4>
<p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p>

<h4> Step 1: Start Docker </h4>
<pre><code>docker run -it --gpus all \
  -v &lt;your-model-path&gt;:/models \
  ghcr.io/01-ai/yi:latest
</code></pre>
<p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p>

<h4>Step 2: Perform inference</h4>
<p>You can perform inference with Yi chat or base models as below.</p>

<h5>Perform inference with Yi chat model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>model_path = '&lt;your-model-mount-path&gt;'</code> instead of <code>model_path = '&lt;your-model-path&gt;'</code>.</p>

<h5>Perform inference with Yi base model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>--model &lt;your-model-mount-path&gt;</code> instead of <code>--model &lt;your-model-path&gt;</code>.</p>
</details>

### Quick start - conda-lock

<details>
<summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary>
<br>
You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies.
<br>
To install the dependencies, follow these steps:

1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>.

2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.
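3. (Optional) Activate the environment and run a quick import check before moving on. This is only a minimal sanity-check sketch; the exact package set is defined by <code>conda-lock.yml</code>, so the assumption that <code>torch</code> and <code>transformers</code> are pinned there may need adjusting.

```bash
# Activate the environment created in the previous step
micromamba activate yi

# Verify that the core dependencies import correctly
# (assumes torch and transformers are included in the lock file)
python -c "import torch, transformers; print(torch.__version__, transformers.__version__)"
```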
</details> ### Quick start - llama.cpp <a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">The following tutorial </a> will guide you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference. <details> <summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary> <br><a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">This tutorial</a> guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p> - [Step 0: Prerequisites](#step-0-prerequisites) - [Step 1: Download llama.cpp](#step-1-download-llamacpp) - [Step 2: Download Yi model](#step-2-download-yi-model) - [Step 3: Perform inference](#step-3-perform-inference) #### Step 0: Prerequisites - This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip. - Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine. #### Step 1: Download `llama.cpp` To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command. ```bash git clone [email protected]:ggerganov/llama.cpp.git ``` #### Step 2: Download Yi model 2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command. ```bash GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF ``` 2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command. ```bash git-lfs pull --include yi-chat-6b.Q2_K.gguf ``` #### Step 3: Perform inference To perform inference with the Yi model, you can use one of the following methods. - [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal) - [Method 2: Perform inference in web](#method-2-perform-inference-in-web) ##### Method 1: Perform inference in terminal To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command. > ##### Tips > > - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model. > > - By default, the model operates in completion mode. > > - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage. ```bash make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e ... How do you feed your pet fox? Please answer this question in 6 simple steps: Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables. Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day. Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise. 
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress.
Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections.
Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care.
...
```

Now you have successfully asked a question to the Yi model and got an answer! 🥳

##### Method 2: Perform inference in web

1. To initialize a lightweight and swift chatbot, run the following command.

```bash
cd llama.cpp
./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf
```

Then you can get an output like this:

```bash
...

llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 5000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Pro
ggml_metal_init: picking default device: Apple M2 Pro
ggml_metal_init: ggml.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M2 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67)
llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67)
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 159.19 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67)
Available slots:
 -> Slot 0 - max context: 2048

llama server listening at http://0.0.0.0:8080
```

2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar.

![Yi model chatbot interface - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp1.png?raw=true)

3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer.

![Ask a question to Yi model - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp2.png?raw=true)

</ul>
</details>

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Web demo

You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this scenario).

[Step 1: Prepare your environment](#step-1-prepare-your-environment).

[Step 2: Download the Yi model](#step-2-download-the-yi-model).

Step 3. To start a web service locally, run the following command.

```bash
python demo/web_demo.py -c <your-model-path>
```

You can access the web UI by entering the address provided in the console into your browser.
![Quick start - web demo](https://github.com/01-ai/Yi/blob/main/assets/img/yi_34b_chat_web_demo.gif?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Fine-tuning ```bash bash finetune/scripts/run_sft_Yi_6b.sh ``` Once finished, you can compare the finetuned model and the base model with the following command: ```bash bash finetune/scripts/run_eval.sh ``` <details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul> ### Finetune code for Yi 6B and 34B #### Preparation ##### From Image By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model. You can also prepare your customized dataset in the following `jsonl` format: ```json { "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." } ``` And then mount them in the container to replace the default ones: ```bash docker run -it \ -v /path/to/save/finetuned/model/:/finetuned-model \ -v /path/to/train.jsonl:/yi/finetune/data/train.json \ -v /path/to/eval.jsonl:/yi/finetune/data/eval.json \ ghcr.io/01-ai/yi:latest \ bash finetune/scripts/run_sft_Yi_6b.sh ``` ##### From Local Server Make sure you have conda. If not, use ```bash mkdir -p ~/miniconda3 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 rm -rf ~/miniconda3/miniconda.sh ~/miniconda3/bin/conda init bash source ~/.bashrc ``` Then, create a conda env: ```bash conda create -n dev_env python=3.10 -y conda activate dev_env pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7 ``` #### Hardware Setup For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh). A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB. #### Quick Start Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like: ```bash |-- $MODEL_PATH | |-- config.json | |-- pytorch_model-00001-of-00002.bin | |-- pytorch_model-00002-of-00002.bin | |-- pytorch_model.bin.index.json | |-- tokenizer_config.json | |-- tokenizer.model | |-- ... ``` Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static. ```bash |-- $DATA_PATH | |-- data | | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet | | |-- test-00000-of-00001-8c7c51afc6d45980.parquet | |-- dataset_infos.json | |-- README.md ``` `finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) ```bash |-- $DATA_PATH |--data |-- train.jsonl |-- eval.jsonl ``` `cd` into the scripts folder, copy and paste the script, and run. For example: ```bash cd finetune/scripts bash run_sft_Yi_6b.sh ``` For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient. 
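If you plan to fine-tune on your own data rather than the bundled example dataset, the snippet below is a minimal sketch of writing question-answer pairs into the `prompt`/`chosen` JSON Lines format described in the Preparation section. The example pairs, the output directory, and the 90/10 split are illustrative assumptions, not part of the official scripts.

```python
import json
from pathlib import Path

# Hypothetical question-answer pairs; replace them with your own data.
pairs = [
    {"question": "Who are you?", "answer": "I'm Yi."},
    {"question": "What can you do?", "answer": "I can answer questions in English and Chinese."},
]

def to_record(pair):
    # Match the "prompt"/"chosen" schema shown in the Preparation section above.
    return {
        "prompt": f"Human: {pair['question']} Assistant:",
        "chosen": pair["answer"],
    }

records = [to_record(p) for p in pairs]
split = max(1, int(len(records) * 0.9))  # simple 90/10 train/eval split (assumption)

out_dir = Path("finetune/data")  # assumed output location; adjust to your $DATA_PATH
out_dir.mkdir(parents=True, exist_ok=True)

with open(out_dir / "train.jsonl", "w", encoding="utf-8") as f:
    for r in records[:split]:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")

with open(out_dir / "eval.jsonl", "w", encoding="utf-8") as f:
    for r in records[split:]:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")
```

The resulting `train.jsonl` and `eval.jsonl` can then be passed to the fine-tuning scripts in the same way as the example dataset, for instance by mounting or copying them to the paths the scripts expect.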
#### Evaluation ```bash cd finetune/scripts bash run_eval.sh ``` Then you'll see the answer from both the base model and the finetuned model. </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quantization #### GPT-Q ```bash python quantization/gptq/quant_autogptq.py \ --model /base_model \ --output_dir /quantized_model \ --trust_remote_code ``` Once finished, you can then evaluate the resulting model as follows: ```bash python quantization/gptq/eval_quantized_model.py \ --model /quantized_model \ --trust_remote_code ``` <details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul> #### GPT-Q quantization [GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model. Yi models can be GPT-Q quantized without a lot of efforts. We provide a step-by-step tutorial below. To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama). And the huggingface transformers has integrated optimum and auto-gptq to perform GPTQ quantization on language models. ##### Do Quantization The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization: ```bash python quant_autogptq.py --model /base_model \ --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code ``` ##### Run Quantized Model You can run a quantized model using the `eval_quantized_model.py`: ```bash python eval_quantized_model.py --model /quantized_model --trust_remote_code ``` </ul> </details> #### AWQ ```bash python quantization/awq/quant_autoawq.py \ --model /base_model \ --output_dir /quantized_model \ --trust_remote_code ``` Once finished, you can then evaluate the resulting model as follows: ```bash python quantization/awq/eval_quantized_model.py \ --model /quantized_model \ --trust_remote_code ``` <details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul> #### AWQ quantization [AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs. Yi models can be AWQ quantized without a lot of efforts. We provide a step-by-step tutorial below. To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ). ##### Do Quantization The `quant_autoawq.py` script is provided for you to perform AWQ quantization: ```bash python quant_autoawq.py --model /base_model \ --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code ``` ##### Run Quantized Model You can run a quantized model using the `eval_quantized_model.py`: ```bash python eval_quantized_model.py --model /quantized_model --trust_remote_code ``` </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Deployment If you want to deploy Yi models, make sure you meet the software and hardware requirements. #### Software requirements Before using Yi quantized models, make sure you've installed the correct software listed below. 
| Model | Software
|---|---
Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi)
Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation)

#### Hardware requirements

Before deploying Yi in your environment, make sure your hardware meets the following requirements.

##### Chat models

| Model | Minimum VRAM | Recommended GPU Example |
|:----------------------|:--------------|:-------------------------------------:|
| Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB) <br> 1 x A800 (40 GB) |

Below are detailed minimum VRAM requirements under different batch use cases.

| Model | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB |
| Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB |
| Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB |
| Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB |
| Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB |
| Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB |

##### Base models

| Model | Minimum VRAM | Recommended GPU Example |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-200K | 50 GB | 1 x A800 (80 GB) |
| Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) |
| Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-200K | 200 GB | 4 x A800 (80 GB) |

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### FAQ
<details>
<summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary>

<br>

#### 💡Fine-tuning
- <strong>Base model or Chat model - which to fine-tune?</strong>
  <br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task.
    - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice.
    - On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice.
    - It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements.
- <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong>
  <br>The key distinction between full-scale fine-tuning on `Yi-34B` and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes.
    - Yi-34B-Chat employs a Supervised Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely.
    - The Base model's fine-tuning is more versatile, with a relatively high performance potential.
    - If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to.
    - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet.

#### 💡Quantization
- <strong>Quantized model versus original model - what is the performance gap?</strong>
    - The performance variance is largely contingent on the quantization method employed and the specific use cases of these models. For instance, for the models officially provided by AWQ, quantization might result in a minor performance drop of a few percentage points from a benchmark standpoint.
    - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results.

#### 💡General
- <strong>Where can I source fine-tuning question answering datasets?</strong>
    - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available.
    - Additionally, GitHub offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets.

- <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong>
  <br>The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full parameter fine-tuning, you'll need 8 GPUs each with 80 GB; however, more economical solutions like LoRA require less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance.

- <strong>Are there any third-party platforms that support chat functionality for the Yi-34B-200K model?</strong>
  <br>If you're looking for third-party chat platforms, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat).
</details>

### Learning hub

<details>
<summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary>

<br>

Welcome to the Yi learning hub!

Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.

The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!

At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.

With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning!
🥳 #### Tutorials ##### Blog tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐](https://mp.weixin.qq.com/s/Ri2ap9_5EMzdfiBhSSL_MQ) | 2024-05-20 | [苏洋](https://github.com/soulteary) | | [使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words-s](https://blog.csdn.net/freewebsys/article/details/134698597?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-17-134698597-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-20 | [fly-iot](https://gitee.com/fly-iot) | | [Yi-VL 最佳实践](https://modelscope.cn/docs/yi-vl最佳实践) | 2024-05-20 | [ModelScope](https://github.com/modelscope) | | [一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型](https://mp.weixin.qq.com/s/ntMs2G_XdWeM3I6RUOBJrA) | 2024-05-13 | [Second State](https://github.com/second-state) | | [零一万物开源Yi-1.5系列大模型](https://mp.weixin.qq.com/s/d-ogq4hcFbsuL348ExJxpA) | 2024-05-13 | [刘聪](https://github.com/liucongg) | | [零一万物Yi-1.5系列模型发布并开源! 34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦!](https://mp.weixin.qq.com/s/3wD-0dCgXB646r720o8JAg) | 2024-05-13 | [ModelScope](https://github.com/modelscope) | | [Yi-34B 本地部署简单测试](https://blog.csdn.net/arkohut/article/details/135331469?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135331469-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [漆妮妮](https://space.bilibili.com/1262370256) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上)](https://blog.csdn.net/weixin_53443275/article/details/136091398?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-5-136091398-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇)](https://blog.csdn.net/weixin_53443275/article/details/136096309) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型](https://mp.weixin.qq.com/s/bBgzGJvUqIohodcy9U-pFw) | 2024-05-13 | AI工程师笔记 | | [使用零一万物 200K 模型和 Dify 快速搭建模型应用](https://zhuanlan.zhihu.com/p/686774859) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [(持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用](https://zhuanlan.zhihu.com/p/671549900) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探](https://mp.weixin.qq.com/s/WaygSfn5T8ZPB1mPdGADEQ) | 2024-05-11 | 江湖评谈 | | [技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台)](https://blog.csdn.net/ucloud2012/article/details/137187469) | 2024-05-11 | [MumuLab](https://blog.csdn.net/ucloud2012?type=blog) | | [多模态大模型Yi-VL-plus体验 效果很棒](https://zhuanlan.zhihu.com/p/694736111) | 2024-04-27 | [大家好我是爱因](https://www.zhihu.com/people/iamein) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 
words-s](https://blog.csdn.net/freewebsys/article/details/134725765?ops_request_misc=%7B%22request%5Fid%22%3A%22171636356716800211598950%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636356716800211598950&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-9-134725765-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-27 | [fly-iot](https://gitee.com/fly-iot) | | [Getting Started with Yi-1.5-9B-Chat](https://www.secondstate.io/articles/yi-1.5-9b-chat/) | 2024-04-27 | [Second State](https://github.com/second-state) | | [基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记](https://mp.weixin.qq.com/s/_ea6g0pzzeO4WyYtuWycWQ) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版](https://blog.csdn.net/alarey/article/details/137769471?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-16-137769471-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-21 | [My的梦想已实现](https://blog.csdn.net/alarey?type=blog) | | [【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words-s,vllm要求算力在7以上的显卡就可以](https://blog.csdn.net/freewebsys/article/details/134754086) | 2024-03-22 | [fly-iot](https://gitee.com/fly-iot) | | [零一万物大模型部署+微调总结](https://blog.csdn.net/v_wus/article/details/135704126?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-18-135704126-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-22 | [v_wus](https://blog.csdn.net/v_wus?type=blog) | | [零一万物Yi大模型vllm推理时Yi-34B或Yi-6bchat重复输出的解决方案](https://blog.csdn.net/qq_39667443/article/details/136028776?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-6-136028776-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [郝铠锋](https://blog.csdn.net/qq_39667443?type=blog) | | [Yi-34B微调训练](https://blog.csdn.net/lsjlnd/article/details/135336984?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-12-135336984-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [lsjlnd](https://blog.csdn.net/lsjlnd?type=blog) | | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) | | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) | | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) | | [零一科技Yi-34B 
Chat大模型环境搭建&推理](https://blog.csdn.net/zzq1989_/article/details/135597181?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-8-135597181-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [要养家的程序员](https://blog.csdn.net/zzq1989_?type=blog) | | [基于LLaMA Factory,单卡3小时训练专属大模型 Agent](https://blog.csdn.net/m0_59596990/article/details/135760285?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135760285-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [机器学习社区](https://blog.csdn.net/m0_59596990?type=blog) | | [双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录](https://blog.csdn.net/arkohut/article/details/135321242?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135321242-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [漆妮妮](https://space.bilibili.com/1262370256) | | [【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b)](https://blog.csdn.net/qq_40302568/article/details/135040985?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-30-135040985-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [aq_Seabiscuit](https://blog.csdn.net/qq_40302568?type=blog) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://blog.csdn.net/arkohut/article/details/135274973) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行?](https://blog.csdn.net/u014374009/article/details/136327696) | 2023-12-28 | [代码讲故事](https://blog.csdn.net/u014374009?type=blog) | | [LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调](https://blog.csdn.net/BIT_666/article/details/134990402) | 2023-12-18 | [BIT_666](https://bitddd.blog.csdn.net/?type=blog) | | [通过vllm框架进行大模型推理](https://blog.csdn.net/weixin_45920955/article/details/135300561?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-13-135300561-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2023-12-18 | [土山炮](https://blog.csdn.net/weixin_45920955?type=blog) | | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) | | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) | | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) | | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 
显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) | ##### GitHub Project | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------- | | [yi-openai-proxy](https://github.com/soulteary/yi-openai-proxy) | 2024-05-11 | [苏洋](https://github.com/soulteary) | | [基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集](https://github.com/zjrwtx/bilibiliQA_databuilder) | 2024-04-29 | [正经人王同学](https://github.com/zjrwtx) | | [基于视频网站和零一万物大模型构建大语言模型高质量训练数据集](https://github.com/zjrwtx/VideoQA_databuilder) | 2024-04-25 | [正经人王同学](https://github.com/zjrwtx) | | [基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友](https://github.com/zjrwtx/open_summary) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [Food-GPT-Yi-model](https://github.com/ThisisHubert/FoodGPT-Yi-model) | 2024-04-21 | [Hubert S](https://github.com/ThisisHubert) | ##### Video tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) | | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2024-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型](https://www.bilibili.com/video/BV16i421X7Jx/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-14 | [titan909](https://space.bilibili.com/526393761) | | [Yi-1.5: True Apache 2.0 Competitor to LLAMA-3](https://www.youtube.com/watch?v=KCDYrfWeTRc) | 2024-05-13 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks](https://www.youtube.com/watch?v=Ba-G7Il0UkA) | 2024-05-13 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [how to install Ollama and run Yi 6B](https://www.youtube.com/watch?v=4Jnar7OUHqQ) | 2024-05-13 | [Ridaa Davids](https://www.youtube.com/@quantanovabusiness) | | [地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B](https://www.bilibili.com/video/BV1Xm411C7V1/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-04 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答](https://www.bilibili.com/video/BV11i421C7B5/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-03 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [基于Yi-34B的领域知识问答项目演示](https://www.bilibili.com/video/BV1zZ42177ZA/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-02 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [使用RTX4090+GaLore算法 
全参微调Yi-6B大模型](https://www.bilibili.com/video/BV1ax4y1U7Ep/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-24 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行](https://www.youtube.com/watch?v=VL-W0TnLCns) | 2024-03-20 | [刘悦的技术博客](https://v3u.cn/) | | [无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲](https://www.youtube.com/watch?v=rBvbgwz3oHM) | 2024-03-16 | [刘悦的技术博客](https://v3u.cn/) | | [量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署](https://www.bilibili.com/video/BV1jx421y7xj/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-05 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字](https://www.bilibili.com/video/BV1BB421z7oA/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-27 | [fly-iot](https://gitee.com/fly-iot) | | [Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏](https://www.bilibili.com/video/BV14J4m1e77f/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-25 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2](https://www.bilibili.com/video/BV19v421677y/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-23 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新](https://www.bilibili.com/video/BV194421F7Fy/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-20 | [fly-iot](https://gitee.com/fly-iot) | | [【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功](https://www.bilibili.com/video/BV19Z421z7cv/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-06 | [fly-iot](https://gitee.com/fly-iot) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1](https://www.bilibili.com/video/BV1tU421o7Co/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-05 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南](https://www.bilibili.com/video/BV1hC411z7xu/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-30 | [小饭护法要转码](https://space.bilibili.com/39486865?spm_id_from=333.788.0.0) | | [Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows](https://www.youtube.com/watch?v=cZs2jRtl0bs) | 2024-01-22 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试)](https://www.bilibili.com/video/BV1VT4y1b7Th/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [小吴苹果机器人](https://space.bilibili.com/1732749682?spm_id_from=333.788.0.0) | | [【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置](https://www.bilibili.com/video/BV1ia4y1y7JH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [fly-iot](https://gitee.com/fly-iot) | | [这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥](https://www.youtube.com/watch?v=eahXJrdtQuc) | 2024-01-20 | [晓漫吧](https://www.youtube.com/@xiaomanba) | | [大模型推理 NvLink 桥接器有用吗|双卡 A6000 
测试一下](https://www.bilibili.com/video/BV1AW4y1w7DC/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-17 | [漆妮妮](https://space.bilibili.com/1262370256) | | [大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能](https://www.bilibili.com/video/BV1aK4y1z7GF/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-15 | [漆妮妮](https://space.bilibili.com/1262370256) | | [C-Eval 大语言模型评测基准- 用 LM Evaluation Harness + vLLM 跑起来](https://www.bilibili.com/video/BV1Yw411g7ZL/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-11 | [漆妮妮](https://space.bilibili.com/1262370256) | | [双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录](https://www.bilibili.com/video/BV1p94y1c7ak/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-01 | [漆妮妮](https://space.bilibili.com/1262370256) | | [手把手教学!使用 vLLM 快速部署 Yi-34B-Chat](https://www.bilibili.com/video/BV1ew41157Mk/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-26 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁](https://www.bilibili.com/video/BV1uc41117zz/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-21 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s](https://www.bilibili.com/video/BV1nj41157L3/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-02 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G](https://www.bilibili.com/video/BV1BM411R7ae/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s](https://www.bilibili.com/video/BV1Hu4y1L7BH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [Yi大模型一键本地部署 技术小白玩转AI](https://www.bilibili.com/video/BV16H4y117md/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [技术小白玩转AI](https://space.bilibili.com/3546586137234288?spm_id_from=333.788.0.0) | | [01.AI's Yi-6B: Overview and Fine-Tuning](https://www.youtube.com/watch?v=mye-UOkAliQ) | 2023-11-28 | [AI Makerspace](https://www.youtube.com/@AI-Makerspace) | | [Yi 34B Chat LLM outperforms Llama 70B](https://www.youtube.com/watch?v=RYtrF-R5jDc) | 2023-11-27 | [DLExplorer](https://www.youtube.com/@DLExplorers-lg7dt) | | [How to run open source models on mac Yi 34b on m3 Max](https://www.youtube.com/watch?v=GAo-dopkgjI) | 2023-11-26 | [TECHNO PREMIUM](https://www.youtube.com/@technopremium91) | | [Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING ](https://www.youtube.com/watch?v=7WBojwwv5Qo) | 2023-11-24 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat](https://www.youtube.com/watch?v=bWCjwtu_tHs) | 2023-11-24 | [Sam Witteveen](https://www.youtube.com/@samwitteveenai) | | [在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器)](https://www.bilibili.com/video/BV1SQ4y18744/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-11-15 | [Second 
State](https://github.com/second-state) | | [Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server)](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-14 | [Second State](https://github.com/second-state) | | [How to Install Yi 34B 200K Llamafied on Windows Laptop](https://www.youtube.com/watch?v=enoha4K4HkQ) | 2023-11-11 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | </details> # Why Yi? - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) - [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k) - [Yi-9B](#yi-9b) ## Ecosystem Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity. - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) ### Upstream The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model). ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False) model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto") ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Downstream > 💡 Tip > > - Feel free to create a PR and share the fantastic work you've built using the Yi series models. > > - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`. #### Serving If you want to get up with Yi in a few minutes, you can use the following services built upon Yi. - Yi-34B-Chat: you can chat with Yi using one of the following platforms: - [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand! - [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs. - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization. #### Quantization If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage. 
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. - [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: - [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ) - [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ) - [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ) - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). - [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm). - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API - [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box. - [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. ![Chat model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_chat.png?raw=true) <details> <summary> Evaluation methods and challenges. ⬇️ </summary> - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. - **Zero-shot vs. 
few-shot**: in chat models, the zero-shot approach is more commonly employed. - **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. - **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets </details> ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. ![Base model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_base.png?raw=true) <details> <summary> Evaluation methods. ⬇️</summary> - **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. - **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. - **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. - **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. - **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. - **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". - **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. </details> #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. ![Yi-9B benchmark - details](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_details.png?raw=true) - In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - overall](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_overall.png?raw=true) - In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. 
![Yi-9B benchmark - code](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_code.png?raw=true) - In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B. ![Yi-9B benchmark - math](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_math.png?raw=true) - In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - text](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_text.png?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Who can use Yi? Everyone! 🙌 ✅ The code and weights of the Yi series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE), which means the Yi series models are free for personal usage, academic purposes, and commercial use. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Misc. ### Acknowledgments A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation. [![yi contributors](https://contrib.rocks/image?repo=01-ai/yi&max=2000&columns=15)](https://github.com/01-ai/yi/graphs/contributors) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Disclaimer We use data compliance checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct, and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### License The code and weights of the Yi-1.5 series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE). If you create derivative works based on this model, please include the following attribution in your derivative works: This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>
01-ai/Yi-6B-Chat-4bits
01-ai
2024-11-11T03:31:33Z
1,414
22
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2403.04652", "arxiv:2311.16502", "arxiv:2401.11944", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2023-11-22T09:55:46Z
--- license: apache-2.0 widget: - example_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" - example_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline_tag: text-generation --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"> </picture> </br> </br> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml"> <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg"> </a> </div> <div style="display: inline-block;"> <a href="mailto:[email protected]"> <img src="https://img.shields.io/badge/✉️[email protected]"> </a> </div> </div> <div align="center"> <h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3> </div> <p align="center"> 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a> </p> <p align="center"> 👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a> </p> <p align="center"> 👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="有官方的微信群嘛 · Issue #43 · 01-ai/Yi" target="_blank"> 💬 WeChat </a> </p> <p align="center"> 📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a> </p> <p align="center"> 📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a> </p> <!-- DO NOT REMOVE ME --> <hr> <details open> <summary></b>📕 Table of Contents</b></summary> - [What is Yi?](#what-is-yi) - [Introduction](#introduction) - [Models](#models) - [Chat models](#chat-models) - [Base models](#base-models) - [Model info](#model-info) - [News](#news) - [How to use Yi?](#how-to-use-yi) - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - 
[llama.cpp](#quick-start---llamacpp) - [conda-lock](#quick-start---conda-lock) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) - [Why Yi?](#why-yi) - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Base model performance](#base-model-performance) - [Chat model performance](#chat-model-performance) - [Tech report](#tech-report) - [Citation](#citation) - [Who can use Yi?](#who-can-use-yi) - [Misc.](#misc) - [Acknowledgements](#acknowledgments) - [Disclaimer](#disclaimer) - [License](#license) </details> <hr> # What is Yi? ## Introduction - 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/). - 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example, - Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024). - Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023). - 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem. <details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br> > 💡 TL;DR > > The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama. - Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018. - Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi. - Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems. - However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights. - As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure. - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. 
This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/). </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## News <details> <summary>🔥 <b>2024-07-29</b>: The <a href="https://github.com/Haijian06/Yi/tree/main/Cookbook">Yi Cookbook 1.0 </a> is released, featuring tutorials and examples in both Chinese and English.</summary> </details> <details> <summary>🎯 <b>2024-05-13</b>: The <a href="https://github.com/01-ai/Yi-1.5">Yi-1.5 series models </a> are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.</summary> </details> <details> <summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary> </details> <details> <summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary> </details> <details open> <summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary> <br> In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance. </details> <details open> <summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary> <br> <code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. </details> <details open> <summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary> <br> <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li> </details> <details> <summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary> <br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ. 
- `Yi-34B-Chat` - `Yi-34B-Chat-4bits` - `Yi-34B-Chat-8bits` - `Yi-6B-Chat` - `Yi-6B-Chat-4bits` - `Yi-6B-Chat-8bits` You can try some of them interactively at: - [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Replicate](https://replicate.com/01-ai) </details> <details> <summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary> </details> <details> <summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary> <br>Application form: - [English](https://cn.mikecrm.com/l91ODJf) - [Chinese](https://cn.mikecrm.com/gnEZjiQ) </details> <details> <summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary> <br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K. </details> <details> <summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary> <br>The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Models Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment). ### Chat models | Model | Download | |---|---| |Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat) | |Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-4bits) | |Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-8bits) | |Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat) | |Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-4bits) | |Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). 
</sup></sub> ### Base models | Model | Download | |---|---| |Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits)| |Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B)| |Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub> ### Model info - For chat and base models <table> <thead> <tr> <th>Model</th> <th>Intro</th> <th>Default context window</th> <th>Pretrained tokens</th> <th>Training Data Date</th> </tr> </thead> <tbody><tr> <td>6B series models</td> <td>They are suitable for personal and academic use.</td> <td rowspan="3">4K</td> <td>3T</td> <td rowspan="3">Up to June 2023</td> </tr> <tr> <td>9B series models</td> <td>It is the best at coding and math in the Yi series models.</td> <td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td> </tr> <tr> <td>34B series models</td> <td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It&#39;s a cost-effective solution that&#39;s affordable and equipped with emergent ability.</td> <td>3T</td> </tr> </tbody></table> - For chat models <details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary> <ul> <br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. <br>However, this higher diversity might amplify certain existing issues, including: <li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.</li> <li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. 
The increased diversity can lead to varying results even under similar input conditions.</li> <li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li> <li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.</li> </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # How to use Yi? - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - [llama.cpp](#quick-start---llamacpp) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) ## Quick start > **💡 Tip**: If you want to get started with the Yi model and explore different methods for inference, check out the [Yi Cookbook](https://github.com/01-ai/Yi/tree/main/Cookbook). ### Choose your path Select one of the following paths to begin your journey with Yi! ![Quick start - Choose your path](https://github.com/01-ai/Yi/blob/main/assets/img/quick_start_path.png?raw=true) #### 🎯 Deploy Yi locally If you prefer to deploy Yi models locally, - 🙋‍♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods: - [pip](#quick-start---pip) - [Docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - 🙋‍♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp). #### 🎯 Not to deploy Yi locally If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options. ##### 🙋‍♀️ Run Yi with APIs If you want to explore more features of Yi, you can adopt one of these methods: - Yi APIs (Yi official) - [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access! - [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate) ##### 🙋‍♀️ Run Yi in playground If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options: - [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). - [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate) ##### 🙋‍♀️ Chat with Yi If you want to chat with Yi, you can use one of these online services, which offer a similar user experience: - [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face) - No registration is required. - [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). 
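If you would rather call the Replicate-hosted Yi APIs mentioned above from code, the snippet below is a minimal sketch rather than an official example. It assumes the `replicate` Python client is installed (`pip install replicate`), that `REPLICATE_API_TOKEN` is set in your environment, and that the `01-ai/yi-34b-chat` slug shown on the linked Replicate page is still current; the input parameter names may differ per model, and you may need to pin an explicit version (`owner/name:version`).

```python
# Minimal sketch of calling the Replicate-hosted Yi chat API from Python.
# Assumptions: `replicate` client installed, REPLICATE_API_TOKEN exported,
# and the "01-ai/yi-34b-chat" slug (from the Replicate page linked above) is valid.
import replicate

output = replicate.run(
    "01-ai/yi-34b-chat",                      # may need "owner/name:version" pinning
    input={"prompt": "hi", "temperature": 0.3},  # parameter names vary per model
)

# The client typically streams text chunks for language models, so join them.
print("".join(output))
```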
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - pip This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference. #### Step 0: Prerequisites - Make sure Python 3.10 or a later version is installed. - If you want to run other Yi models, see [software and hardware requirements](#deployment). #### Step 1: Prepare your environment To set up the environment and install the required packages, execute the following command. ```bash git clone https://github.com/01-ai/Yi.git cd yi pip install -r requirements.txt ``` #### Step 2: Download the Yi model You can download the weights and tokenizer of Yi models from the following sources: - [Hugging Face](https://huggingface.co/01-ai) - [ModelScope](https://www.modelscope.cn/organization/01ai/) - [WiseModel](https://wisemodel.cn/organization/01.AI) #### Step 3: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model 1. Create a file named `quick_start.py` and copy the following content to it. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = '<your-model-path>' tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) # Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM. model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ``` 2. Run `quick_start.py`. ```bash python quick_start.py ``` Then you can see an output similar to the one below. 🥳 ```bash Hello! How can I assist you today? ``` ##### Perform inference with Yi base model - Yi-34B The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model). You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo). ```bash python demo/text_generation.py --model <your-model-path> ``` Then you can see an output similar to the one below. 🥳 <details> <summary>Output. ⬇️ </summary> <br> **Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry, **Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up... 
</details> - Yi-9B Input ```bash from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_DIR = "01-ai/Yi-9B" model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto") tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False) input_text = "# write the quick sort algorithm" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=256) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output ```bash # write the quick sort algorithm def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) # test the quick sort algorithm print(quick_sort([3, 6, 8, 10, 1, 2, 1])) ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - Docker <details> <summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary> <br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference. <h4>Step 0: Prerequisites</h4> <p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p> <h4> Step 1: Start Docker </h4> <pre><code>docker run -it --gpus all \ -v &lt;your-model-path&gt;: /models ghcr.io/01-ai/yi:latest </code></pre> <p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p> <h4>Step 2: Perform inference</h4> <p>You can perform inference with Yi chat or base models as below.</p> <h5>Perform inference with Yi chat model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>model_path = '&lt;your-model-mount-path&gt;'</code> instead of <code>model_path = '&lt;your-model-path&gt;'</code>.</p> <h5>Perform inference with Yi base model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>--model &lt;your-model-mount-path&gt;'</code> instead of <code>model &lt;your-model-path&gt;</code>.</p> </details> ### Quick start - conda-lock <details> <summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary> <br> You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies. <br> To install the dependencies, follow these steps: 1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>. 2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies. 
</details> ### Quick start - llama.cpp <a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">The following tutorial </a> will guide you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference. <details> <summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary> <br><a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">This tutorial</a> guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p> - [Step 0: Prerequisites](#step-0-prerequisites) - [Step 1: Download llama.cpp](#step-1-download-llamacpp) - [Step 2: Download Yi model](#step-2-download-yi-model) - [Step 3: Perform inference](#step-3-perform-inference) #### Step 0: Prerequisites - This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip. - Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine. #### Step 1: Download `llama.cpp` To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command. ```bash git clone [email protected]:ggerganov/llama.cpp.git ``` #### Step 2: Download Yi model 2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command. ```bash GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF ``` 2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command. ```bash git-lfs pull --include yi-chat-6b.Q2_K.gguf ``` #### Step 3: Perform inference To perform inference with the Yi model, you can use one of the following methods. - [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal) - [Method 2: Perform inference in web](#method-2-perform-inference-in-web) ##### Method 1: Perform inference in terminal To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command. > ##### Tips > > - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model. > > - By default, the model operates in completion mode. > > - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage. ```bash make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e ... How do you feed your pet fox? Please answer this question in 6 simple steps: Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables. Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day. Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise. 
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress. Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections. Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care. ... ``` Now you have successfully asked a question to the Yi model and got an answer! 🥳 ##### Method 2: Perform inference in web 1. To initialize a lightweight and swift chatbot, run the following command. ```bash cd llama.cpp ./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf ``` Then you can get an output like this: ```bash ... llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 5000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M2 Pro ggml_metal_init: picking default device: Apple M2 Pro ggml_metal_init: ggml.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal' ggml_metal_init: GPU name: Apple M2 Pro ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008) ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB ggml_metal_init: maxTransferRate = built-in GPU ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67) llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67) llama_build_graph: non-view tensors processed: 676/676 llama_new_context_with_model: compute buffer total size = 159.19 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67) Available slots: -> Slot 0 - max context: 2048 llama server listening at http://0.0.0.0:8080 ``` 2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar. ![Yi model chatbot interface - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp1.png?raw=true) 3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer. ![Ask a question to Yi model - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp2.png?raw=true) </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Web demo You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this senario). [Step 1: Prepare your environment](#step-1-prepare-your-environment). [Step 2: Download the Yi model](#step-2-download-the-yi-model). Step 3. To start a web service locally, run the following command. ```bash python demo/web_demo.py -c <your-model-path> ``` You can access the web UI by entering the address provided in the console into your browser. 
![Quick start - web demo](https://github.com/01-ai/Yi/blob/main/assets/img/yi_34b_chat_web_demo.gif?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Fine-tuning ```bash bash finetune/scripts/run_sft_Yi_6b.sh ``` Once finished, you can compare the finetuned model and the base model with the following command: ```bash bash finetune/scripts/run_eval.sh ``` <details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul> ### Finetune code for Yi 6B and 34B #### Preparation ##### From Image By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model. You can also prepare your customized dataset in the following `jsonl` format: ```json { "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." } ``` And then mount them in the container to replace the default ones: ```bash docker run -it \ -v /path/to/save/finetuned/model/:/finetuned-model \ -v /path/to/train.jsonl:/yi/finetune/data/train.json \ -v /path/to/eval.jsonl:/yi/finetune/data/eval.json \ ghcr.io/01-ai/yi:latest \ bash finetune/scripts/run_sft_Yi_6b.sh ``` ##### From Local Server Make sure you have conda. If not, use ```bash mkdir -p ~/miniconda3 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 rm -rf ~/miniconda3/miniconda.sh ~/miniconda3/bin/conda init bash source ~/.bashrc ``` Then, create a conda env: ```bash conda create -n dev_env python=3.10 -y conda activate dev_env pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7 ``` #### Hardware Setup For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh). A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB. #### Quick Start Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like: ```bash |-- $MODEL_PATH | |-- config.json | |-- pytorch_model-00001-of-00002.bin | |-- pytorch_model-00002-of-00002.bin | |-- pytorch_model.bin.index.json | |-- tokenizer_config.json | |-- tokenizer.model | |-- ... ``` Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static. ```bash |-- $DATA_PATH | |-- data | | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet | | |-- test-00000-of-00001-8c7c51afc6d45980.parquet | |-- dataset_infos.json | |-- README.md ``` `finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) ```bash |-- $DATA_PATH |--data |-- train.jsonl |-- eval.jsonl ``` `cd` into the scripts folder, copy and paste the script, and run. For example: ```bash cd finetune/scripts bash run_sft_Yi_6b.sh ``` For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient. 
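If you are preparing your own data rather than the example datasets above, it is worth double-checking that every record follows the `prompt`/`chosen` JSON Lines format shown earlier in this section before launching training. The script below is only an illustrative sketch (the example pairs and output paths are placeholders, not part of the repository) that writes `train.jsonl` and `eval.jsonl` files in that format:

```python
# Illustrative sketch: convert your own (question, answer) pairs into the
# "prompt"/"chosen" JSON Lines format expected by the fine-tuning scripts.
# The example pairs and output file names below are placeholders.
import json
import random

pairs = [
    ("Who are you?", "I'm Yi."),
    ("What can you do?", "I can answer questions in English and Chinese."),
    ("Where can I find the Yi models?", "On Hugging Face, ModelScope, and wisemodel."),
]

records = [
    {"prompt": f"Human: {question} Assistant:", "chosen": answer}
    for question, answer in pairs
]

random.seed(0)
random.shuffle(records)
split = max(1, int(len(records) * 0.9))  # keep at least one training example

for path, subset in (("train.jsonl", records[:split]), ("eval.jsonl", records[split:])):
    with open(path, "w", encoding="utf-8") as f:
        for record in subset:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

The resulting files can then be mounted into the container (or copied into `finetune/data/`) exactly as shown in the Docker example above.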
#### Evaluation
```bash
cd finetune/scripts
bash run_eval.sh
```

Then you'll see the answer from both the base model and the finetuned model.
</ul>
</details>

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### Quantization

#### GPT-Q
```bash
python quantization/gptq/quant_autogptq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/gptq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>

#### GPT-Q quantization

[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model.

Yi models can be GPT-Q quantized without much effort. We provide a step-by-step tutorial below.

To run GPT-Q, we use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama). Hugging Face Transformers has integrated `optimum` and `auto-gptq`, so GPT-Q quantization can also be performed on language models directly through Transformers.

##### Do Quantization

The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:

```bash
python quant_autogptq.py --model /base_model \
    --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>

#### AWQ
```bash
python quantization/awq/quant_autoawq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/awq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>

#### AWQ quantization

[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) method for LLMs.

Yi models can be AWQ quantized without much effort. We provide a step-by-step tutorial below.

To run AWQ, we use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).

##### Do Quantization

The `quant_autoawq.py` script is provided for you to perform AWQ quantization:

```bash
python quant_autoawq.py --model /base_model \
    --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### Deployment

If you want to deploy Yi models, make sure you meet the software and hardware requirements.

#### Software requirements

Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software
|---|---
Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi)
Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation)

#### Hardware requirements

Before deploying Yi in your environment, make sure your hardware meets the following requirements.

##### Chat models

| Model              | Minimum VRAM | Recommended GPU Example |
|:-------------------|:-------------|:-----------------------:|
| Yi-6B-Chat         | 15 GB        | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits   | 4 GB         | 1 x RTX 3060 (12 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits   | 8 GB         | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat        | 72 GB        | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-Chat-4bits  | 20 GB        | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits  | 38 GB        | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB) <br> 1 x A800 (40 GB) |

Below are detailed minimum VRAM requirements under different batch use cases.

| Model              | batch=1 | batch=4 | batch=16 | batch=32 |
| ------------------ | ------- | ------- | -------- | -------- |
| Yi-6B-Chat         | 12 GB   | 13 GB   | 15 GB    | 18 GB    |
| Yi-6B-Chat-4bits   | 4 GB    | 5 GB    | 7 GB     | 10 GB    |
| Yi-6B-Chat-8bits   | 7 GB    | 8 GB    | 10 GB    | 14 GB    |
| Yi-34B-Chat        | 65 GB   | 68 GB   | 76 GB    | > 80 GB  |
| Yi-34B-Chat-4bits  | 19 GB   | 20 GB   | 30 GB    | 40 GB    |
| Yi-34B-Chat-8bits  | 35 GB   | 37 GB   | 46 GB    | 58 GB    |

##### Base models

| Model        | Minimum VRAM | Recommended GPU Example |
|--------------|--------------|:-----------------------:|
| Yi-6B        | 15 GB        | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-200K   | 50 GB        | 1 x A800 (80 GB) |
| Yi-9B        | 20 GB        | 1 x RTX 4090 (24 GB) |
| Yi-34B       | 72 GB        | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-200K  | 200 GB       | 4 x A800 (80 GB) |

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### FAQ
<details>
<summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary>
<br>

#### 💡Fine-tuning
- <strong>Base model or Chat model - which to fine-tune?</strong>
  <br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task.
    - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice.
    - On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice.
    - It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements.
- <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong>
  <br>
  The key distinction between full-scale fine-tuning on `Yi-34B` and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes.
  - Yi-34B-Chat employs a Supervised Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely.
  - The Base model's fine-tuning is more versatile, with a relatively high performance potential.
- If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to. - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet. #### 💡Quantization - <strong>Quantized model versus original model - what is the performance gap?</strong> - The performance variance is largely contingent on the quantization method employed and the specific use cases of these models. For instance, when it comes to models provided by the AWQ official, from a Benchmark standpoint, quantization might result in a minor performance drop of a few percentage points. - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results. #### 💡General - <strong>Where can I source fine-tuning question answering datasets?</strong> - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available. - Additionally, Github offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets. - <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong> <br> The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full parameter fine-tuning, you'll need 8 GPUs each with 80 GB; however, more economical solutions like Lora require less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance. - <strong>Are there any third-party platforms that support chat functionality for the Yi-34b-200k model?</strong> <br> If you're looking for third-party Chats, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat). </details> ### Learning hub <details> <summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary> <br> Welcome to the Yi learning hub! Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more. The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions! At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below. With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 
🥳 #### Tutorials ##### Blog tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐](https://mp.weixin.qq.com/s/Ri2ap9_5EMzdfiBhSSL_MQ) | 2024-05-20 | [苏洋](https://github.com/soulteary) | | [使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words-s](https://blog.csdn.net/freewebsys/article/details/134698597?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-17-134698597-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-20 | [fly-iot](https://gitee.com/fly-iot) | | [Yi-VL 最佳实践](https://modelscope.cn/docs/yi-vl最佳实践) | 2024-05-20 | [ModelScope](https://github.com/modelscope) | | [一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型](https://mp.weixin.qq.com/s/ntMs2G_XdWeM3I6RUOBJrA) | 2024-05-13 | [Second State](https://github.com/second-state) | | [零一万物开源Yi-1.5系列大模型](https://mp.weixin.qq.com/s/d-ogq4hcFbsuL348ExJxpA) | 2024-05-13 | [刘聪](https://github.com/liucongg) | | [零一万物Yi-1.5系列模型发布并开源! 34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦!](https://mp.weixin.qq.com/s/3wD-0dCgXB646r720o8JAg) | 2024-05-13 | [ModelScope](https://github.com/modelscope) | | [Yi-34B 本地部署简单测试](https://blog.csdn.net/arkohut/article/details/135331469?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135331469-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [漆妮妮](https://space.bilibili.com/1262370256) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上)](https://blog.csdn.net/weixin_53443275/article/details/136091398?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-5-136091398-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇)](https://blog.csdn.net/weixin_53443275/article/details/136096309) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型](https://mp.weixin.qq.com/s/bBgzGJvUqIohodcy9U-pFw) | 2024-05-13 | AI工程师笔记 | | [使用零一万物 200K 模型和 Dify 快速搭建模型应用](https://zhuanlan.zhihu.com/p/686774859) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [(持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用](https://zhuanlan.zhihu.com/p/671549900) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探](https://mp.weixin.qq.com/s/WaygSfn5T8ZPB1mPdGADEQ) | 2024-05-11 | 江湖评谈 | | [技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台)](https://blog.csdn.net/ucloud2012/article/details/137187469) | 2024-05-11 | [MumuLab](https://blog.csdn.net/ucloud2012?type=blog) | | [多模态大模型Yi-VL-plus体验 效果很棒](https://zhuanlan.zhihu.com/p/694736111) | 2024-04-27 | [大家好我是爱因](https://www.zhihu.com/people/iamein) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 
words-s](https://blog.csdn.net/freewebsys/article/details/134725765?ops_request_misc=%7B%22request%5Fid%22%3A%22171636356716800211598950%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636356716800211598950&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-9-134725765-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-27 | [fly-iot](https://gitee.com/fly-iot) | | [Getting Started with Yi-1.5-9B-Chat](https://www.secondstate.io/articles/yi-1.5-9b-chat/) | 2024-04-27 | [Second State](https://github.com/second-state) | | [基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记](https://mp.weixin.qq.com/s/_ea6g0pzzeO4WyYtuWycWQ) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版](https://blog.csdn.net/alarey/article/details/137769471?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-16-137769471-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-21 | [My的梦想已实现](https://blog.csdn.net/alarey?type=blog) | | [【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words-s,vllm要求算力在7以上的显卡就可以](https://blog.csdn.net/freewebsys/article/details/134754086) | 2024-03-22 | [fly-iot](https://gitee.com/fly-iot) | | [零一万物大模型部署+微调总结](https://blog.csdn.net/v_wus/article/details/135704126?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-18-135704126-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-22 | [v_wus](https://blog.csdn.net/v_wus?type=blog) | | [零一万物Yi大模型vllm推理时Yi-34B或Yi-6bchat重复输出的解决方案](https://blog.csdn.net/qq_39667443/article/details/136028776?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-6-136028776-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [郝铠锋](https://blog.csdn.net/qq_39667443?type=blog) | | [Yi-34B微调训练](https://blog.csdn.net/lsjlnd/article/details/135336984?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-12-135336984-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [lsjlnd](https://blog.csdn.net/lsjlnd?type=blog) | | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) | | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) | | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) | | [零一科技Yi-34B 
Chat大模型环境搭建&推理](https://blog.csdn.net/zzq1989_/article/details/135597181?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-8-135597181-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [要养家的程序员](https://blog.csdn.net/zzq1989_?type=blog) | | [基于LLaMA Factory,单卡3小时训练专属大模型 Agent](https://blog.csdn.net/m0_59596990/article/details/135760285?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135760285-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [机器学习社区](https://blog.csdn.net/m0_59596990?type=blog) | | [双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录](https://blog.csdn.net/arkohut/article/details/135321242?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135321242-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [漆妮妮](https://space.bilibili.com/1262370256) | | [【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b)](https://blog.csdn.net/qq_40302568/article/details/135040985?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-30-135040985-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [aq_Seabiscuit](https://blog.csdn.net/qq_40302568?type=blog) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://blog.csdn.net/arkohut/article/details/135274973) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行?](https://blog.csdn.net/u014374009/article/details/136327696) | 2023-12-28 | [代码讲故事](https://blog.csdn.net/u014374009?type=blog) | | [LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调](https://blog.csdn.net/BIT_666/article/details/134990402) | 2023-12-18 | [BIT_666](https://bitddd.blog.csdn.net/?type=blog) | | [通过vllm框架进行大模型推理](https://blog.csdn.net/weixin_45920955/article/details/135300561?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-13-135300561-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2023-12-18 | [土山炮](https://blog.csdn.net/weixin_45920955?type=blog) | | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) | | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) | | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) | | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 
显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) | ##### GitHub Project | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------- | | [yi-openai-proxy](https://github.com/soulteary/yi-openai-proxy) | 2024-05-11 | [苏洋](https://github.com/soulteary) | | [基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集](https://github.com/zjrwtx/bilibiliQA_databuilder) | 2024-04-29 | [正经人王同学](https://github.com/zjrwtx) | | [基于视频网站和零一万物大模型构建大语言模型高质量训练数据集](https://github.com/zjrwtx/VideoQA_databuilder) | 2024-04-25 | [正经人王同学](https://github.com/zjrwtx) | | [基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友](https://github.com/zjrwtx/open_summary) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [Food-GPT-Yi-model](https://github.com/ThisisHubert/FoodGPT-Yi-model) | 2024-04-21 | [Hubert S](https://github.com/ThisisHubert) | ##### Video tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) | | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2024-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型](https://www.bilibili.com/video/BV16i421X7Jx/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-14 | [titan909](https://space.bilibili.com/526393761) | | [Yi-1.5: True Apache 2.0 Competitor to LLAMA-3](https://www.youtube.com/watch?v=KCDYrfWeTRc) | 2024-05-13 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks](https://www.youtube.com/watch?v=Ba-G7Il0UkA) | 2024-05-13 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [how to install Ollama and run Yi 6B](https://www.youtube.com/watch?v=4Jnar7OUHqQ) | 2024-05-13 | [Ridaa Davids](https://www.youtube.com/@quantanovabusiness) | | [地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B](https://www.bilibili.com/video/BV1Xm411C7V1/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-04 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答](https://www.bilibili.com/video/BV11i421C7B5/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-03 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [基于Yi-34B的领域知识问答项目演示](https://www.bilibili.com/video/BV1zZ42177ZA/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-02 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [使用RTX4090+GaLore算法 
全参微调Yi-6B大模型](https://www.bilibili.com/video/BV1ax4y1U7Ep/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-24 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行](https://www.youtube.com/watch?v=VL-W0TnLCns) | 2024-03-20 | [刘悦的技术博客](https://v3u.cn/) | | [无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲](https://www.youtube.com/watch?v=rBvbgwz3oHM) | 2024-03-16 | [刘悦的技术博客](https://v3u.cn/) | | [量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署](https://www.bilibili.com/video/BV1jx421y7xj/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-05 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字](https://www.bilibili.com/video/BV1BB421z7oA/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-27 | [fly-iot](https://gitee.com/fly-iot) | | [Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏](https://www.bilibili.com/video/BV14J4m1e77f/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-25 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2](https://www.bilibili.com/video/BV19v421677y/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-23 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新](https://www.bilibili.com/video/BV194421F7Fy/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-20 | [fly-iot](https://gitee.com/fly-iot) | | [【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功](https://www.bilibili.com/video/BV19Z421z7cv/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-06 | [fly-iot](https://gitee.com/fly-iot) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1](https://www.bilibili.com/video/BV1tU421o7Co/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-05 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南](https://www.bilibili.com/video/BV1hC411z7xu/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-30 | [小饭护法要转码](https://space.bilibili.com/39486865?spm_id_from=333.788.0.0) | | [Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows](https://www.youtube.com/watch?v=cZs2jRtl0bs) | 2024-01-22 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试)](https://www.bilibili.com/video/BV1VT4y1b7Th/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [小吴苹果机器人](https://space.bilibili.com/1732749682?spm_id_from=333.788.0.0) | | [【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置](https://www.bilibili.com/video/BV1ia4y1y7JH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [fly-iot](https://gitee.com/fly-iot) | | [这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥](https://www.youtube.com/watch?v=eahXJrdtQuc) | 2024-01-20 | [晓漫吧](https://www.youtube.com/@xiaomanba) | | [大模型推理 NvLink 桥接器有用吗|双卡 A6000 
测试一下](https://www.bilibili.com/video/BV1AW4y1w7DC/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-17 | [漆妮妮](https://space.bilibili.com/1262370256) | | [大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能](https://www.bilibili.com/video/BV1aK4y1z7GF/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-15 | [漆妮妮](https://space.bilibili.com/1262370256) | | [C-Eval 大语言模型评测基准- 用 LM Evaluation Harness + vLLM 跑起来](https://www.bilibili.com/video/BV1Yw411g7ZL/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-11 | [漆妮妮](https://space.bilibili.com/1262370256) | | [双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录](https://www.bilibili.com/video/BV1p94y1c7ak/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-01 | [漆妮妮](https://space.bilibili.com/1262370256) | | [手把手教学!使用 vLLM 快速部署 Yi-34B-Chat](https://www.bilibili.com/video/BV1ew41157Mk/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-26 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁](https://www.bilibili.com/video/BV1uc41117zz/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-21 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s](https://www.bilibili.com/video/BV1nj41157L3/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-02 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G](https://www.bilibili.com/video/BV1BM411R7ae/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s](https://www.bilibili.com/video/BV1Hu4y1L7BH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [Yi大模型一键本地部署 技术小白玩转AI](https://www.bilibili.com/video/BV16H4y117md/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [技术小白玩转AI](https://space.bilibili.com/3546586137234288?spm_id_from=333.788.0.0) | | [01.AI's Yi-6B: Overview and Fine-Tuning](https://www.youtube.com/watch?v=mye-UOkAliQ) | 2023-11-28 | [AI Makerspace](https://www.youtube.com/@AI-Makerspace) | | [Yi 34B Chat LLM outperforms Llama 70B](https://www.youtube.com/watch?v=RYtrF-R5jDc) | 2023-11-27 | [DLExplorer](https://www.youtube.com/@DLExplorers-lg7dt) | | [How to run open source models on mac Yi 34b on m3 Max](https://www.youtube.com/watch?v=GAo-dopkgjI) | 2023-11-26 | [TECHNO PREMIUM](https://www.youtube.com/@technopremium91) | | [Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING ](https://www.youtube.com/watch?v=7WBojwwv5Qo) | 2023-11-24 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat](https://www.youtube.com/watch?v=bWCjwtu_tHs) | 2023-11-24 | [Sam Witteveen](https://www.youtube.com/@samwitteveenai) | | [在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器)](https://www.bilibili.com/video/BV1SQ4y18744/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-11-15 | [Second 
State](https://github.com/second-state) | | [Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server)](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-14 | [Second State](https://github.com/second-state) | | [How to Install Yi 34B 200K Llamafied on Windows Laptop](https://www.youtube.com/watch?v=enoha4K4HkQ) | 2023-11-11 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | </details> # Why Yi? - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) - [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k) - [Yi-9B](#yi-9b) ## Ecosystem Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity. - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) ### Upstream The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model). ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False) model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto") ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Downstream > 💡 Tip > > - Feel free to create a PR and share the fantastic work you've built using the Yi series models. > > - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`. #### Serving If you want to get up with Yi in a few minutes, you can use the following services built upon Yi. - Yi-34B-Chat: you can chat with Yi using one of the following platforms: - [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand! - [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs. - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization. #### Quantization If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage. 
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. - [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: - [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ) - [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ) - [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ) - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). - [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm). - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API - [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box. - [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. ![Chat model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_chat.png?raw=true) <details> <summary> Evaluation methods and challenges. ⬇️ </summary> - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. - **Zero-shot vs. 
few-shot**: in chat models, the zero-shot approach is more commonly employed. - **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. - **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets </details> ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. ![Base model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_base.png?raw=true) <details> <summary> Evaluation methods. ⬇️</summary> - **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. - **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. - **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. - **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. - **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. - **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". - **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. </details> #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. ![Yi-9B benchmark - details](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_details.png?raw=true) - In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - overall](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_overall.png?raw=true) - In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. 
![Yi-9B benchmark - code](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_code.png?raw=true)

- In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.

![Yi-9B benchmark - math](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_math.png?raw=true)

- In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.

![Yi-9B benchmark - text](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_text.png?raw=true)

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

# Who can use Yi?

Everyone! 🙌 ✅

The code and weights of the Yi series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE), which means the Yi series models are free for personal usage, academic purposes, and commercial use.

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

# Misc.

### Acknowledgments

A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation.

[![yi contributors](https://contrib.rocks/image?repo=01-ai/yi&max=2000&columns=15)](https://github.com/01-ai/yi/graphs/contributors)

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### Disclaimer

We use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns.

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### License

The code and weights of the Yi-1.5 series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE).

If you create derivative works based on this model, please include the following attribution in your derivative works:

This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License.

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>
01-ai/Yi-9B-200K
01-ai
2024-11-11T03:31:33Z
8,931
75
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2403.04652", "arxiv:2311.16502", "arxiv:2401.11944", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-15T06:03:01Z
--- license: apache-2.0 widget: - example_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" - example_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline_tag: text-generation --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"> </picture> </br> </br> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml"> <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg"> </a> </div> <div style="display: inline-block;"> <a href="mailto:[email protected]"> <img src="https://img.shields.io/badge/✉️[email protected]"> </a> </div> </div> <div align="center"> <h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3> </div> <p align="center"> 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a> </p> <p align="center"> 👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a> </p> <p align="center"> 👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="有官方的微信群嘛 · Issue #43 · 01-ai/Yi" target="_blank"> 💬 WeChat </a> </p> <p align="center"> 📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a> </p> <p align="center"> 📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a> </p> <!-- DO NOT REMOVE ME --> <hr> <details open> <summary></b>📕 Table of Contents</b></summary> - [What is Yi?](#what-is-yi) - [Introduction](#introduction) - [Models](#models) - [Chat models](#chat-models) - [Base models](#base-models) - [Model info](#model-info) - [News](#news) - [How to use Yi?](#how-to-use-yi) - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - 
[llama.cpp](#quick-start---llamacpp) - [conda-lock](#quick-start---conda-lock) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) - [Why Yi?](#why-yi) - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Base model performance](#base-model-performance) - [Chat model performance](#chat-model-performance) - [Tech report](#tech-report) - [Citation](#citation) - [Who can use Yi?](#who-can-use-yi) - [Misc.](#misc) - [Acknowledgements](#acknowledgments) - [Disclaimer](#disclaimer) - [License](#license) </details> <hr> # What is Yi? ## Introduction - 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/). - 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example, - Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024). - Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023). - 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem. <details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br> > 💡 TL;DR > > The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama. - Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018. - Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi. - Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems. - However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights. - As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure. - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. 
This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/). </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## News <details> <summary>🔥 <b>2024-07-29</b>: The <a href="https://github.com/Haijian06/Yi/tree/main/Cookbook">Yi Cookbook 1.0 </a> is released, featuring tutorials and examples in both Chinese and English.</summary> </details> <details> <summary>🎯 <b>2024-05-13</b>: The <a href="https://github.com/01-ai/Yi-1.5">Yi-1.5 series models </a> are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.</summary> </details> <details> <summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary> </details> <details> <summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary> </details> <details open> <summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary> <br> In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance. </details> <details open> <summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary> <br> <code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. </details> <details open> <summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary> <br> <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li> </details> <details> <summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary> <br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ. 
- `Yi-34B-Chat` - `Yi-34B-Chat-4bits` - `Yi-34B-Chat-8bits` - `Yi-6B-Chat` - `Yi-6B-Chat-4bits` - `Yi-6B-Chat-8bits` You can try some of them interactively at: - [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Replicate](https://replicate.com/01-ai) </details> <details> <summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary> </details> <details> <summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary> <br>Application form: - [English](https://cn.mikecrm.com/l91ODJf) - [Chinese](https://cn.mikecrm.com/gnEZjiQ) </details> <details> <summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary> <br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K. </details> <details> <summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary> <br>The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Models Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment). ### Chat models | Model | Download | |---|---| |Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat) | |Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-4bits) | |Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-8bits) | |Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat) | |Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-4bits) | |Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). 
</sup></sub> ### Base models | Model | Download | |---|---| |Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits)| |Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B)| |Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub> ### Model info - For chat and base models <table> <thead> <tr> <th>Model</th> <th>Intro</th> <th>Default context window</th> <th>Pretrained tokens</th> <th>Training Data Date</th> </tr> </thead> <tbody><tr> <td>6B series models</td> <td>They are suitable for personal and academic use.</td> <td rowspan="3">4K</td> <td>3T</td> <td rowspan="3">Up to June 2023</td> </tr> <tr> <td>9B series models</td> <td>It is the best at coding and math in the Yi series models.</td> <td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td> </tr> <tr> <td>34B series models</td> <td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It&#39;s a cost-effective solution that&#39;s affordable and equipped with emergent ability.</td> <td>3T</td> </tr> </tbody></table> - For chat models <details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary> <ul> <br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. <br>However, this higher diversity might amplify certain existing issues, including: <li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.</li> <li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. 
The increased diversity can lead to varying results even under similar input conditions.</li> <li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li> <li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.</li> </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # How to use Yi? - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - [llama.cpp](#quick-start---llamacpp) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) ## Quick start > **💡 Tip**: If you want to get started with the Yi model and explore different methods for inference, check out the [Yi Cookbook](https://github.com/01-ai/Yi/tree/main/Cookbook). ### Choose your path Select one of the following paths to begin your journey with Yi! ![Quick start - Choose your path](https://github.com/01-ai/Yi/blob/main/assets/img/quick_start_path.png?raw=true) #### 🎯 Deploy Yi locally If you prefer to deploy Yi models locally, - 🙋‍♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods: - [pip](#quick-start---pip) - [Docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - 🙋‍♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp). #### 🎯 Not to deploy Yi locally If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options. ##### 🙋‍♀️ Run Yi with APIs If you want to explore more features of Yi, you can adopt one of these methods: - Yi APIs (Yi official) - [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access! - [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate) ##### 🙋‍♀️ Run Yi in playground If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options: - [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). - [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate) ##### 🙋‍♀️ Chat with Yi If you want to chat with Yi, you can use one of these online services, which offer a similar user experience: - [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face) - No registration is required. - [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). 
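> **💡 Tip**: As noted in the chat model section above, generation parameters such as `temperature`, `top_p`, and `top_k` control the balance between creativity and coherence. If you deploy locally with `transformers` (see the pip quick start below), these can be passed to `model.generate`. The snippet below is only a sketch: it assumes `model`, `tokenizer`, and `input_ids` have already been prepared as in the pip quick start, and the parameter values are illustrative, not recommended settings.

```python
# Assumes `model`, `tokenizer`, and `input_ids` are prepared as in the pip quick start below.
output_ids = model.generate(
    input_ids.to("cuda"),
    max_new_tokens=256,   # cap the length of the response
    do_sample=True,       # enable sampling so temperature/top_p/top_k take effect
    temperature=0.6,      # lower values give more deterministic, coherent answers
    top_p=0.9,            # nucleus sampling threshold
    top_k=40,             # restrict sampling to the 40 most likely tokens
)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```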
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - pip This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference. #### Step 0: Prerequisites - Make sure Python 3.10 or a later version is installed. - If you want to run other Yi models, see [software and hardware requirements](#deployment). #### Step 1: Prepare your environment To set up the environment and install the required packages, execute the following command. ```bash git clone https://github.com/01-ai/Yi.git cd yi pip install -r requirements.txt ``` #### Step 2: Download the Yi model You can download the weights and tokenizer of Yi models from the following sources: - [Hugging Face](https://huggingface.co/01-ai) - [ModelScope](https://www.modelscope.cn/organization/01ai/) - [WiseModel](https://wisemodel.cn/organization/01.AI) #### Step 3: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model 1. Create a file named `quick_start.py` and copy the following content to it. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = '<your-model-path>' tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) # Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM. model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ``` 2. Run `quick_start.py`. ```bash python quick_start.py ``` Then you can see an output similar to the one below. 🥳 ```bash Hello! How can I assist you today? ``` ##### Perform inference with Yi base model - Yi-34B The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model). You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo). ```bash python demo/text_generation.py --model <your-model-path> ``` Then you can see an output similar to the one below. 🥳 <details> <summary>Output. ⬇️ </summary> <br> **Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry, **Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up... 
  </details>

- Yi-9B

  Input

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  MODEL_DIR = "01-ai/Yi-9B"
  model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto")
  tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)

  input_text = "# write the quick sort algorithm"
  inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
  outputs = model.generate(**inputs, max_length=256)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

  Output

  ```python
  # write the quick sort algorithm
  def quick_sort(arr):
      if len(arr) <= 1:
          return arr
      pivot = arr[len(arr) // 2]
      left = [x for x in arr if x < pivot]
      middle = [x for x in arr if x == pivot]
      right = [x for x in arr if x > pivot]
      return quick_sort(left) + middle + quick_sort(right)

  # test the quick sort algorithm
  print(quick_sort([3, 6, 8, 10, 1, 2, 1]))
  ```

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Quick start - Docker
<details>
<summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference.
<h4>Step 0: Prerequisites</h4>
<p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p>

<h4> Step 1: Start Docker </h4>
<pre><code>docker run -it --gpus all \
  -v &lt;your-model-path&gt;:/models \
  ghcr.io/01-ai/yi:latest
</code></pre>
<p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p>

<h4>Step 2: Perform inference</h4>
<p>You can perform inference with Yi chat or base models as below.</p>

<h5>Perform inference with Yi chat model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>model_path = '&lt;your-model-mount-path&gt;'</code> instead of <code>model_path = '&lt;your-model-path&gt;'</code>.</p>

<h5>Perform inference with Yi base model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>--model &lt;your-model-mount-path&gt;</code> instead of <code>--model &lt;your-model-path&gt;</code>.</p>
</details>

### Quick start - conda-lock

<details>
<summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary>
<br>
You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies.
<br>
To install the dependencies, follow these steps:

1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>.

2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.
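
3. Once the environment is created, you would typically activate it before running any of the quick start steps above. A minimal sketch, assuming the environment name <code>yi</code> from the command above and that micromamba's shell hook has already been initialized:

```bash
# Assumes micromamba's shell hook is set up (e.g., via `micromamba shell init`).
micromamba activate yi

# Run the pip quick start from the repository root inside the activated environment.
python quick_start.py
```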
</details> ### Quick start - llama.cpp <a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">The following tutorial </a> will guide you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference. <details> <summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary> <br><a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">This tutorial</a> guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p> - [Step 0: Prerequisites](#step-0-prerequisites) - [Step 1: Download llama.cpp](#step-1-download-llamacpp) - [Step 2: Download Yi model](#step-2-download-yi-model) - [Step 3: Perform inference](#step-3-perform-inference) #### Step 0: Prerequisites - This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip. - Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine. #### Step 1: Download `llama.cpp` To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command. ```bash git clone [email protected]:ggerganov/llama.cpp.git ``` #### Step 2: Download Yi model 2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command. ```bash GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF ``` 2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command. ```bash git-lfs pull --include yi-chat-6b.Q2_K.gguf ``` #### Step 3: Perform inference To perform inference with the Yi model, you can use one of the following methods. - [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal) - [Method 2: Perform inference in web](#method-2-perform-inference-in-web) ##### Method 1: Perform inference in terminal To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command. > ##### Tips > > - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model. > > - By default, the model operates in completion mode. > > - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage. ```bash make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e ... How do you feed your pet fox? Please answer this question in 6 simple steps: Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables. Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day. Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise. 
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress. Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections. Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care. ... ``` Now you have successfully asked a question to the Yi model and got an answer! 🥳 ##### Method 2: Perform inference in web 1. To initialize a lightweight and swift chatbot, run the following command. ```bash cd llama.cpp ./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf ``` Then you can get an output like this: ```bash ... llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 5000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M2 Pro ggml_metal_init: picking default device: Apple M2 Pro ggml_metal_init: ggml.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal' ggml_metal_init: GPU name: Apple M2 Pro ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008) ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB ggml_metal_init: maxTransferRate = built-in GPU ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67) llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67) llama_build_graph: non-view tensors processed: 676/676 llama_new_context_with_model: compute buffer total size = 159.19 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67) Available slots: -> Slot 0 - max context: 2048 llama server listening at http://0.0.0.0:8080 ``` 2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar. ![Yi model chatbot interface - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp1.png?raw=true) 3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer. ![Ask a question to Yi model - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp2.png?raw=true) </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Web demo You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this senario). [Step 1: Prepare your environment](#step-1-prepare-your-environment). [Step 2: Download the Yi model](#step-2-download-the-yi-model). Step 3. To start a web service locally, run the following command. ```bash python demo/web_demo.py -c <your-model-path> ``` You can access the web UI by entering the address provided in the console into your browser. 
![Quick start - web demo](https://github.com/01-ai/Yi/blob/main/assets/img/yi_34b_chat_web_demo.gif?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Fine-tuning ```bash bash finetune/scripts/run_sft_Yi_6b.sh ``` Once finished, you can compare the finetuned model and the base model with the following command: ```bash bash finetune/scripts/run_eval.sh ``` <details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul> ### Finetune code for Yi 6B and 34B #### Preparation ##### From Image By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model. You can also prepare your customized dataset in the following `jsonl` format: ```json { "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." } ``` And then mount them in the container to replace the default ones: ```bash docker run -it \ -v /path/to/save/finetuned/model/:/finetuned-model \ -v /path/to/train.jsonl:/yi/finetune/data/train.json \ -v /path/to/eval.jsonl:/yi/finetune/data/eval.json \ ghcr.io/01-ai/yi:latest \ bash finetune/scripts/run_sft_Yi_6b.sh ``` ##### From Local Server Make sure you have conda. If not, use ```bash mkdir -p ~/miniconda3 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 rm -rf ~/miniconda3/miniconda.sh ~/miniconda3/bin/conda init bash source ~/.bashrc ``` Then, create a conda env: ```bash conda create -n dev_env python=3.10 -y conda activate dev_env pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7 ``` #### Hardware Setup For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh). A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB. #### Quick Start Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like: ```bash |-- $MODEL_PATH | |-- config.json | |-- pytorch_model-00001-of-00002.bin | |-- pytorch_model-00002-of-00002.bin | |-- pytorch_model.bin.index.json | |-- tokenizer_config.json | |-- tokenizer.model | |-- ... ``` Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static. ```bash |-- $DATA_PATH | |-- data | | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet | | |-- test-00000-of-00001-8c7c51afc6d45980.parquet | |-- dataset_infos.json | |-- README.md ``` `finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) ```bash |-- $DATA_PATH |--data |-- train.jsonl |-- eval.jsonl ``` `cd` into the scripts folder, copy and paste the script, and run. For example: ```bash cd finetune/scripts bash run_sft_Yi_6b.sh ``` For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient. 
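If you want to fine-tune on your own data rather than the default [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) subset, it first needs to be converted into the `prompt`/`chosen` jsonl format shown earlier in this section. A minimal sketch of such a conversion (the `raw_pairs` records and the output path are placeholders; point them at your own data and at wherever you store or mount `train.jsonl`):

```python
import json

# Hypothetical question/answer pairs; replace with your own data source.
raw_pairs = [
    {"question": "Who are you?", "answer": "I'm Yi."},
    {"question": "What can you do?", "answer": "I can answer questions in English and Chinese."},
]

# Write records in the prompt/chosen format expected by the finetune scripts.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for pair in raw_pairs:
        record = {
            "prompt": f"Human: {pair['question']} Assistant:",
            "chosen": pair["answer"],
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```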
#### Evaluation

```bash
cd finetune/scripts
bash run_eval.sh
```

Then you'll see the answer from both the base model and the finetuned model.
</ul>
</details>

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Quantization

#### GPT-Q
```bash
python quantization/gptq/quant_autogptq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/gptq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>

#### GPT-Q quantization

[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model.

Yi models can be GPT-Q quantized without much effort. We provide a step-by-step tutorial below.

To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama). In addition, the Hugging Face transformers library has integrated optimum and auto-gptq, so GPTQ quantization can also be performed directly on language models.

##### Do Quantization

The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:

```bash
python quant_autogptq.py --model /base_model \
    --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>

#### AWQ
```bash
python quantization/awq/quant_autoawq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/awq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>

#### AWQ quantization

[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.

Yi models can be AWQ quantized without much effort. We provide a step-by-step tutorial below.

To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).

##### Do Quantization

The `quant_autoawq.py` script is provided for you to perform AWQ quantization:

```bash
python quant_autoawq.py --model /base_model \
    --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Deployment

If you want to deploy Yi models, make sure you meet the software and hardware requirements.

#### Software requirements

Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software
|---|---
Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi)
Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation)

#### Hardware requirements

Before deploying Yi in your environment, make sure your hardware meets the following requirements.

##### Chat models

| Model                | Minimum VRAM |        Recommended GPU Example       |
|:----------------------|:--------------|:-------------------------------------:|
| Yi-6B-Chat           | 15 GB         | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits     | 4 GB          | 1 x RTX 3060 (12 GB)<br> 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits     | 8 GB          | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat          | 72 GB         | 4 x RTX 4090 (24 GB)<br> 1 x A800 (80 GB) |
| Yi-34B-Chat-4bits    | 20 GB         | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits    | 38 GB         | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB)<br> 1 x A800 (40 GB) |

Below are detailed minimum VRAM requirements under different batch use cases.

|  Model                  | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-6B-Chat              | 12 GB   | 13 GB   | 15 GB    | 18 GB    |
| Yi-6B-Chat-4bits        | 4 GB    | 5 GB    | 7 GB     | 10 GB    |
| Yi-6B-Chat-8bits        | 7 GB    | 8 GB    | 10 GB    | 14 GB    |
| Yi-34B-Chat             | 65 GB   | 68 GB   | 76 GB    | > 80 GB  |
| Yi-34B-Chat-4bits       | 19 GB   | 20 GB   | 30 GB    | 40 GB    |
| Yi-34B-Chat-8bits       | 35 GB   | 37 GB   | 46 GB    | 58 GB    |

##### Base models

| Model                | Minimum VRAM |        Recommended GPU Example       |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B                | 15 GB        | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-200K           | 50 GB        | 1 x A800 (80 GB) |
| Yi-9B                | 20 GB        | 1 x RTX 4090 (24 GB) |
| Yi-34B               | 72 GB        | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-200K          | 200 GB       | 4 x A800 (80 GB) |

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### FAQ
<details>
<summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary>
<br>

#### 💡Fine-tuning
- <strong>Base model or Chat model - which to fine-tune?</strong>
  <br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task.
    - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice.
    - On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice.
    - It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements.
- <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong>
  <br>The key distinction between full-scale fine-tuning on `Yi-34B` and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes.
    - Yi-34B-Chat employs a Supervised Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely.
    - The Base model's fine-tuning is more versatile, with a relatively high performance potential.
- If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to. - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet. #### 💡Quantization - <strong>Quantized model versus original model - what is the performance gap?</strong> - The performance variance is largely contingent on the quantization method employed and the specific use cases of these models. For instance, when it comes to models provided by the AWQ official, from a Benchmark standpoint, quantization might result in a minor performance drop of a few percentage points. - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results. #### 💡General - <strong>Where can I source fine-tuning question answering datasets?</strong> - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available. - Additionally, Github offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets. - <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong> <br> The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full parameter fine-tuning, you'll need 8 GPUs each with 80 GB; however, more economical solutions like Lora require less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance. - <strong>Are there any third-party platforms that support chat functionality for the Yi-34b-200k model?</strong> <br> If you're looking for third-party Chats, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat). </details> ### Learning hub <details> <summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary> <br> Welcome to the Yi learning hub! Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more. The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions! At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below. With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 
🥳 #### Tutorials ##### Blog tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐](https://mp.weixin.qq.com/s/Ri2ap9_5EMzdfiBhSSL_MQ) | 2024-05-20 | [苏洋](https://github.com/soulteary) | | [使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words-s](https://blog.csdn.net/freewebsys/article/details/134698597?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-17-134698597-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-20 | [fly-iot](https://gitee.com/fly-iot) | | [Yi-VL 最佳实践](https://modelscope.cn/docs/yi-vl最佳实践) | 2024-05-20 | [ModelScope](https://github.com/modelscope) | | [一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型](https://mp.weixin.qq.com/s/ntMs2G_XdWeM3I6RUOBJrA) | 2024-05-13 | [Second State](https://github.com/second-state) | | [零一万物开源Yi-1.5系列大模型](https://mp.weixin.qq.com/s/d-ogq4hcFbsuL348ExJxpA) | 2024-05-13 | [刘聪](https://github.com/liucongg) | | [零一万物Yi-1.5系列模型发布并开源! 34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦!](https://mp.weixin.qq.com/s/3wD-0dCgXB646r720o8JAg) | 2024-05-13 | [ModelScope](https://github.com/modelscope) | | [Yi-34B 本地部署简单测试](https://blog.csdn.net/arkohut/article/details/135331469?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135331469-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [漆妮妮](https://space.bilibili.com/1262370256) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上)](https://blog.csdn.net/weixin_53443275/article/details/136091398?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-5-136091398-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇)](https://blog.csdn.net/weixin_53443275/article/details/136096309) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型](https://mp.weixin.qq.com/s/bBgzGJvUqIohodcy9U-pFw) | 2024-05-13 | AI工程师笔记 | | [使用零一万物 200K 模型和 Dify 快速搭建模型应用](https://zhuanlan.zhihu.com/p/686774859) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [(持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用](https://zhuanlan.zhihu.com/p/671549900) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探](https://mp.weixin.qq.com/s/WaygSfn5T8ZPB1mPdGADEQ) | 2024-05-11 | 江湖评谈 | | [技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台)](https://blog.csdn.net/ucloud2012/article/details/137187469) | 2024-05-11 | [MumuLab](https://blog.csdn.net/ucloud2012?type=blog) | | [多模态大模型Yi-VL-plus体验 效果很棒](https://zhuanlan.zhihu.com/p/694736111) | 2024-04-27 | [大家好我是爱因](https://www.zhihu.com/people/iamein) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 
words-s](https://blog.csdn.net/freewebsys/article/details/134725765?ops_request_misc=%7B%22request%5Fid%22%3A%22171636356716800211598950%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636356716800211598950&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-9-134725765-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-27 | [fly-iot](https://gitee.com/fly-iot) | | [Getting Started with Yi-1.5-9B-Chat](https://www.secondstate.io/articles/yi-1.5-9b-chat/) | 2024-04-27 | [Second State](https://github.com/second-state) | | [基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记](https://mp.weixin.qq.com/s/_ea6g0pzzeO4WyYtuWycWQ) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版](https://blog.csdn.net/alarey/article/details/137769471?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-16-137769471-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-21 | [My的梦想已实现](https://blog.csdn.net/alarey?type=blog) | | [【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words-s,vllm要求算力在7以上的显卡就可以](https://blog.csdn.net/freewebsys/article/details/134754086) | 2024-03-22 | [fly-iot](https://gitee.com/fly-iot) | | [零一万物大模型部署+微调总结](https://blog.csdn.net/v_wus/article/details/135704126?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-18-135704126-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-22 | [v_wus](https://blog.csdn.net/v_wus?type=blog) | | [零一万物Yi大模型vllm推理时Yi-34B或Yi-6bchat重复输出的解决方案](https://blog.csdn.net/qq_39667443/article/details/136028776?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-6-136028776-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [郝铠锋](https://blog.csdn.net/qq_39667443?type=blog) | | [Yi-34B微调训练](https://blog.csdn.net/lsjlnd/article/details/135336984?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-12-135336984-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [lsjlnd](https://blog.csdn.net/lsjlnd?type=blog) | | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) | | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) | | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) | | [零一科技Yi-34B 
Chat大模型环境搭建&推理](https://blog.csdn.net/zzq1989_/article/details/135597181?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-8-135597181-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [要养家的程序员](https://blog.csdn.net/zzq1989_?type=blog) | | [基于LLaMA Factory,单卡3小时训练专属大模型 Agent](https://blog.csdn.net/m0_59596990/article/details/135760285?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135760285-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [机器学习社区](https://blog.csdn.net/m0_59596990?type=blog) | | [双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录](https://blog.csdn.net/arkohut/article/details/135321242?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135321242-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [漆妮妮](https://space.bilibili.com/1262370256) | | [【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b)](https://blog.csdn.net/qq_40302568/article/details/135040985?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-30-135040985-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [aq_Seabiscuit](https://blog.csdn.net/qq_40302568?type=blog) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://blog.csdn.net/arkohut/article/details/135274973) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行?](https://blog.csdn.net/u014374009/article/details/136327696) | 2023-12-28 | [代码讲故事](https://blog.csdn.net/u014374009?type=blog) | | [LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调](https://blog.csdn.net/BIT_666/article/details/134990402) | 2023-12-18 | [BIT_666](https://bitddd.blog.csdn.net/?type=blog) | | [通过vllm框架进行大模型推理](https://blog.csdn.net/weixin_45920955/article/details/135300561?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-13-135300561-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2023-12-18 | [土山炮](https://blog.csdn.net/weixin_45920955?type=blog) | | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) | | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) | | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) | | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 
显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) | ##### GitHub Project | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------- | | [yi-openai-proxy](https://github.com/soulteary/yi-openai-proxy) | 2024-05-11 | [苏洋](https://github.com/soulteary) | | [基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集](https://github.com/zjrwtx/bilibiliQA_databuilder) | 2024-04-29 | [正经人王同学](https://github.com/zjrwtx) | | [基于视频网站和零一万物大模型构建大语言模型高质量训练数据集](https://github.com/zjrwtx/VideoQA_databuilder) | 2024-04-25 | [正经人王同学](https://github.com/zjrwtx) | | [基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友](https://github.com/zjrwtx/open_summary) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [Food-GPT-Yi-model](https://github.com/ThisisHubert/FoodGPT-Yi-model) | 2024-04-21 | [Hubert S](https://github.com/ThisisHubert) | ##### Video tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) | | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2024-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型](https://www.bilibili.com/video/BV16i421X7Jx/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-14 | [titan909](https://space.bilibili.com/526393761) | | [Yi-1.5: True Apache 2.0 Competitor to LLAMA-3](https://www.youtube.com/watch?v=KCDYrfWeTRc) | 2024-05-13 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks](https://www.youtube.com/watch?v=Ba-G7Il0UkA) | 2024-05-13 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [how to install Ollama and run Yi 6B](https://www.youtube.com/watch?v=4Jnar7OUHqQ) | 2024-05-13 | [Ridaa Davids](https://www.youtube.com/@quantanovabusiness) | | [地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B](https://www.bilibili.com/video/BV1Xm411C7V1/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-04 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答](https://www.bilibili.com/video/BV11i421C7B5/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-03 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [基于Yi-34B的领域知识问答项目演示](https://www.bilibili.com/video/BV1zZ42177ZA/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-02 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [使用RTX4090+GaLore算法 
全参微调Yi-6B大模型](https://www.bilibili.com/video/BV1ax4y1U7Ep/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-24 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行](https://www.youtube.com/watch?v=VL-W0TnLCns) | 2024-03-20 | [刘悦的技术博客](https://v3u.cn/) | | [无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲](https://www.youtube.com/watch?v=rBvbgwz3oHM) | 2024-03-16 | [刘悦的技术博客](https://v3u.cn/) | | [量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署](https://www.bilibili.com/video/BV1jx421y7xj/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-05 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字](https://www.bilibili.com/video/BV1BB421z7oA/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-27 | [fly-iot](https://gitee.com/fly-iot) | | [Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏](https://www.bilibili.com/video/BV14J4m1e77f/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-25 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2](https://www.bilibili.com/video/BV19v421677y/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-23 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新](https://www.bilibili.com/video/BV194421F7Fy/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-20 | [fly-iot](https://gitee.com/fly-iot) | | [【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功](https://www.bilibili.com/video/BV19Z421z7cv/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-06 | [fly-iot](https://gitee.com/fly-iot) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1](https://www.bilibili.com/video/BV1tU421o7Co/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-05 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南](https://www.bilibili.com/video/BV1hC411z7xu/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-30 | [小饭护法要转码](https://space.bilibili.com/39486865?spm_id_from=333.788.0.0) | | [Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows](https://www.youtube.com/watch?v=cZs2jRtl0bs) | 2024-01-22 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试)](https://www.bilibili.com/video/BV1VT4y1b7Th/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [小吴苹果机器人](https://space.bilibili.com/1732749682?spm_id_from=333.788.0.0) | | [【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置](https://www.bilibili.com/video/BV1ia4y1y7JH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [fly-iot](https://gitee.com/fly-iot) | | [这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥](https://www.youtube.com/watch?v=eahXJrdtQuc) | 2024-01-20 | [晓漫吧](https://www.youtube.com/@xiaomanba) | | [大模型推理 NvLink 桥接器有用吗|双卡 A6000 
测试一下](https://www.bilibili.com/video/BV1AW4y1w7DC/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-17 | [漆妮妮](https://space.bilibili.com/1262370256) | | [大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能](https://www.bilibili.com/video/BV1aK4y1z7GF/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-15 | [漆妮妮](https://space.bilibili.com/1262370256) | | [C-Eval 大语言模型评测基准- 用 LM Evaluation Harness + vLLM 跑起来](https://www.bilibili.com/video/BV1Yw411g7ZL/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-11 | [漆妮妮](https://space.bilibili.com/1262370256) | | [双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录](https://www.bilibili.com/video/BV1p94y1c7ak/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-01 | [漆妮妮](https://space.bilibili.com/1262370256) | | [手把手教学!使用 vLLM 快速部署 Yi-34B-Chat](https://www.bilibili.com/video/BV1ew41157Mk/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-26 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁](https://www.bilibili.com/video/BV1uc41117zz/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-21 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s](https://www.bilibili.com/video/BV1nj41157L3/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-02 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G](https://www.bilibili.com/video/BV1BM411R7ae/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s](https://www.bilibili.com/video/BV1Hu4y1L7BH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [Yi大模型一键本地部署 技术小白玩转AI](https://www.bilibili.com/video/BV16H4y117md/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [技术小白玩转AI](https://space.bilibili.com/3546586137234288?spm_id_from=333.788.0.0) | | [01.AI's Yi-6B: Overview and Fine-Tuning](https://www.youtube.com/watch?v=mye-UOkAliQ) | 2023-11-28 | [AI Makerspace](https://www.youtube.com/@AI-Makerspace) | | [Yi 34B Chat LLM outperforms Llama 70B](https://www.youtube.com/watch?v=RYtrF-R5jDc) | 2023-11-27 | [DLExplorer](https://www.youtube.com/@DLExplorers-lg7dt) | | [How to run open source models on mac Yi 34b on m3 Max](https://www.youtube.com/watch?v=GAo-dopkgjI) | 2023-11-26 | [TECHNO PREMIUM](https://www.youtube.com/@technopremium91) | | [Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING ](https://www.youtube.com/watch?v=7WBojwwv5Qo) | 2023-11-24 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat](https://www.youtube.com/watch?v=bWCjwtu_tHs) | 2023-11-24 | [Sam Witteveen](https://www.youtube.com/@samwitteveenai) | | [在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器)](https://www.bilibili.com/video/BV1SQ4y18744/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-11-15 | [Second 
State](https://github.com/second-state) | | [Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server)](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-14 | [Second State](https://github.com/second-state) | | [How to Install Yi 34B 200K Llamafied on Windows Laptop](https://www.youtube.com/watch?v=enoha4K4HkQ) | 2023-11-11 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | </details> # Why Yi? - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) - [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k) - [Yi-9B](#yi-9b) ## Ecosystem Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity. - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) ### Upstream The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model). ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False) model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto") ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Downstream > 💡 Tip > > - Feel free to create a PR and share the fantastic work you've built using the Yi series models. > > - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`. #### Serving If you want to get up with Yi in a few minutes, you can use the following services built upon Yi. - Yi-34B-Chat: you can chat with Yi using one of the following platforms: - [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand! - [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs. - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization. #### Quantization If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage. 
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. - [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: - [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ) - [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ) - [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ) - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). - [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm). - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API - [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box. - [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. ![Chat model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_chat.png?raw=true) <details> <summary> Evaluation methods and challenges. ⬇️ </summary> - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. - **Zero-shot vs. 
few-shot**: in chat models, the zero-shot approach is more commonly employed. - **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. - **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets </details> ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. ![Base model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_base.png?raw=true) <details> <summary> Evaluation methods. ⬇️</summary> - **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. - **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. - **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. - **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. - **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. - **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". - **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. </details> #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. ![Yi-9B benchmark - details](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_details.png?raw=true) - In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - overall](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_overall.png?raw=true) - In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. 
![Yi-9B benchmark - code](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_code.png?raw=true)

- In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.

![Yi-9B benchmark - math](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_math.png?raw=true)

- In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.

![Yi-9B benchmark - text](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_text.png?raw=true)

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

# Who can use Yi?

Everyone! 🙌 ✅

The code and weights of the Yi series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE), which means the Yi series models are free for personal usage, academic purposes, and commercial use.

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

# Misc.

### Acknowledgments

A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation.

[![yi contributors](https://contrib.rocks/image?repo=01-ai/yi&max=2000&columns=15)](https://github.com/01-ai/yi/graphs/contributors)

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Disclaimer

We use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns.

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### License

The code and weights of the Yi-1.5 series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE). If you create derivative works based on this model, please include the following attribution in your derivative works:

This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License.

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>
01-ai/Yi-34B-Chat-8bits
01-ai
2024-11-11T03:31:32Z
50
28
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2403.04652", "arxiv:2311.16502", "arxiv:2401.11944", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2023-11-22T09:54:19Z
--- license: apache-2.0 widget: - example_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" - example_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline_tag: text-generation --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"> </picture> </br> </br> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml"> <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg"> </a> </div> <div style="display: inline-block;"> <a href="mailto:[email protected]"> <img src="https://img.shields.io/badge/✉️[email protected]"> </a> </div> </div> <div align="center"> <h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3> </div> <p align="center"> 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a> </p> <p align="center"> 👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a> </p> <p align="center"> 👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="有官方的微信群嘛 · Issue #43 · 01-ai/Yi" target="_blank"> 💬 WeChat </a> </p> <p align="center"> 📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a> </p> <p align="center"> 📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a> </p> <!-- DO NOT REMOVE ME --> <hr> <details open> <summary></b>📕 Table of Contents</b></summary> - [What is Yi?](#what-is-yi) - [Introduction](#introduction) - [Models](#models) - [Chat models](#chat-models) - [Base models](#base-models) - [Model info](#model-info) - [News](#news) - [How to use Yi?](#how-to-use-yi) - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - 
[llama.cpp](#quick-start---llamacpp) - [conda-lock](#quick-start---conda-lock) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) - [Why Yi?](#why-yi) - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Base model performance](#base-model-performance) - [Chat model performance](#chat-model-performance) - [Tech report](#tech-report) - [Citation](#citation) - [Who can use Yi?](#who-can-use-yi) - [Misc.](#misc) - [Acknowledgements](#acknowledgments) - [Disclaimer](#disclaimer) - [License](#license) </details> <hr> # What is Yi? ## Introduction - 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/). - 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example, - Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024). - Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023). - 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem. <details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br> > 💡 TL;DR > > The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama. - Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018. - Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi. - Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems. - However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights. - As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure. - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. 
This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/). </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## News <details> <summary>🔥 <b>2024-07-29</b>: The <a href="https://github.com/Haijian06/Yi/tree/main/Cookbook">Yi Cookbook 1.0 </a> is released, featuring tutorials and examples in both Chinese and English.</summary> </details> <details> <summary>🎯 <b>2024-05-13</b>: The <a href="https://github.com/01-ai/Yi-1.5">Yi-1.5 series models </a> are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.</summary> </details> <details> <summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary> </details> <details> <summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary> </details> <details open> <summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary> <br> In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance. </details> <details open> <summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary> <br> <code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. </details> <details open> <summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary> <br> <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li> </details> <details> <summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary> <br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ. 
- `Yi-34B-Chat` - `Yi-34B-Chat-4bits` - `Yi-34B-Chat-8bits` - `Yi-6B-Chat` - `Yi-6B-Chat-4bits` - `Yi-6B-Chat-8bits` You can try some of them interactively at: - [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Replicate](https://replicate.com/01-ai) </details> <details> <summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary> </details> <details> <summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary> <br>Application form: - [English](https://cn.mikecrm.com/l91ODJf) - [Chinese](https://cn.mikecrm.com/gnEZjiQ) </details> <details> <summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary> <br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K. </details> <details> <summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary> <br>The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Models Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment). ### Chat models | Model | Download | |---|---| |Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat) | |Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-4bits) | |Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-8bits) | |Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat) | |Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-4bits) | |Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). 
</sup></sub> ### Base models | Model | Download | |---|---| |Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits)| |Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B)| |Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub> ### Model info - For chat and base models <table> <thead> <tr> <th>Model</th> <th>Intro</th> <th>Default context window</th> <th>Pretrained tokens</th> <th>Training Data Date</th> </tr> </thead> <tbody><tr> <td>6B series models</td> <td>They are suitable for personal and academic use.</td> <td rowspan="3">4K</td> <td>3T</td> <td rowspan="3">Up to June 2023</td> </tr> <tr> <td>9B series models</td> <td>It is the best at coding and math in the Yi series models.</td> <td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td> </tr> <tr> <td>34B series models</td> <td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It&#39;s a cost-effective solution that&#39;s affordable and equipped with emergent ability.</td> <td>3T</td> </tr> </tbody></table> - For chat models <details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary> <ul> <br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. <br>However, this higher diversity might amplify certain existing issues, including: <li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.</li> <li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. 
The increased diversity can lead to varying results even under similar input conditions.</li> <li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li> <li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.</li> </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # How to use Yi? - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - [llama.cpp](#quick-start---llamacpp) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) ## Quick start > **💡 Tip**: If you want to get started with the Yi model and explore different methods for inference, check out the [Yi Cookbook](https://github.com/01-ai/Yi/tree/main/Cookbook). ### Choose your path Select one of the following paths to begin your journey with Yi! ![Quick start - Choose your path](https://github.com/01-ai/Yi/blob/main/assets/img/quick_start_path.png?raw=true) #### 🎯 Deploy Yi locally If you prefer to deploy Yi models locally, - 🙋‍♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods: - [pip](#quick-start---pip) - [Docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - 🙋‍♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp). #### 🎯 Not to deploy Yi locally If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options. ##### 🙋‍♀️ Run Yi with APIs If you want to explore more features of Yi, you can adopt one of these methods: - Yi APIs (Yi official) - [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access! - [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate) ##### 🙋‍♀️ Run Yi in playground If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options: - [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). - [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate) ##### 🙋‍♀️ Chat with Yi If you want to chat with Yi, you can use one of these online services, which offer a similar user experience: - [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face) - No registration is required. - [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). 
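For the Replicate option listed under "Run Yi with APIs" above, a minimal Python sketch might look like the following. This is an assumption-laden illustration, not official Yi code: it assumes the `replicate` Python client is installed, that `REPLICATE_API_TOKEN` is set in your environment, and that your client version resolves `01-ai/yi-34b-chat` to its latest version (older clients may require an explicit `owner/name:version` reference). Adjust the input fields to match the schema shown on the Replicate model page.

```python
# Hypothetical sketch of calling Yi-34B-Chat via the Replicate Python client.
import replicate

# replicate.run streams output chunks for language models, so join them into one string.
output = replicate.run(
    "01-ai/yi-34b-chat",
    input={"prompt": "hi"},
)
print("".join(output))
```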
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - pip This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference. #### Step 0: Prerequisites - Make sure Python 3.10 or a later version is installed. - If you want to run other Yi models, see [software and hardware requirements](#deployment). #### Step 1: Prepare your environment To set up the environment and install the required packages, execute the following command. ```bash git clone https://github.com/01-ai/Yi.git cd yi pip install -r requirements.txt ``` #### Step 2: Download the Yi model You can download the weights and tokenizer of Yi models from the following sources: - [Hugging Face](https://huggingface.co/01-ai) - [ModelScope](https://www.modelscope.cn/organization/01ai/) - [WiseModel](https://wisemodel.cn/organization/01.AI) #### Step 3: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model 1. Create a file named `quick_start.py` and copy the following content to it. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = '<your-model-path>' tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) # Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM. model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ``` 2. Run `quick_start.py`. ```bash python quick_start.py ``` Then you can see an output similar to the one below. 🥳 ```bash Hello! How can I assist you today? ``` ##### Perform inference with Yi base model - Yi-34B The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model). You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo). ```bash python demo/text_generation.py --model <your-model-path> ``` Then you can see an output similar to the one below. 🥳 <details> <summary>Output. ⬇️ </summary> <br> **Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry, **Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up... 
</details> - Yi-9B Input ```bash from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_DIR = "01-ai/Yi-9B" model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto") tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False) input_text = "# write the quick sort algorithm" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=256) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output ```bash # write the quick sort algorithm def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) # test the quick sort algorithm print(quick_sort([3, 6, 8, 10, 1, 2, 1])) ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - Docker <details> <summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary> <br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference. <h4>Step 0: Prerequisites</h4> <p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p> <h4> Step 1: Start Docker </h4> <pre><code>docker run -it --gpus all \ -v &lt;your-model-path&gt;: /models ghcr.io/01-ai/yi:latest </code></pre> <p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p> <h4>Step 2: Perform inference</h4> <p>You can perform inference with Yi chat or base models as below.</p> <h5>Perform inference with Yi chat model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>model_path = '&lt;your-model-mount-path&gt;'</code> instead of <code>model_path = '&lt;your-model-path&gt;'</code>.</p> <h5>Perform inference with Yi base model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>--model &lt;your-model-mount-path&gt;'</code> instead of <code>model &lt;your-model-path&gt;</code>.</p> </details> ### Quick start - conda-lock <details> <summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary> <br> You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies. <br> To install the dependencies, follow these steps: 1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>. 2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies. 
</details> ### Quick start - llama.cpp <a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">The following tutorial </a> will guide you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference. <details> <summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary> <br><a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">This tutorial</a> guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p> - [Step 0: Prerequisites](#step-0-prerequisites) - [Step 1: Download llama.cpp](#step-1-download-llamacpp) - [Step 2: Download Yi model](#step-2-download-yi-model) - [Step 3: Perform inference](#step-3-perform-inference) #### Step 0: Prerequisites - This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip. - Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine. #### Step 1: Download `llama.cpp` To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command. ```bash git clone [email protected]:ggerganov/llama.cpp.git ``` #### Step 2: Download Yi model 2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command. ```bash GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF ``` 2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command. ```bash git-lfs pull --include yi-chat-6b.Q2_K.gguf ``` #### Step 3: Perform inference To perform inference with the Yi model, you can use one of the following methods. - [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal) - [Method 2: Perform inference in web](#method-2-perform-inference-in-web) ##### Method 1: Perform inference in terminal To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command. > ##### Tips > > - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model. > > - By default, the model operates in completion mode. > > - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage. ```bash make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e ... How do you feed your pet fox? Please answer this question in 6 simple steps: Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables. Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day. Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise. 
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress. Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections. Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care. ... ``` Now you have successfully asked a question to the Yi model and got an answer! 🥳 ##### Method 2: Perform inference in web 1. To initialize a lightweight and swift chatbot, run the following command. ```bash cd llama.cpp ./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf ``` Then you can get an output like this: ```bash ... llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 5000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M2 Pro ggml_metal_init: picking default device: Apple M2 Pro ggml_metal_init: ggml.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal' ggml_metal_init: GPU name: Apple M2 Pro ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008) ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB ggml_metal_init: maxTransferRate = built-in GPU ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67) llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67) llama_build_graph: non-view tensors processed: 676/676 llama_new_context_with_model: compute buffer total size = 159.19 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67) Available slots: -> Slot 0 - max context: 2048 llama server listening at http://0.0.0.0:8080 ``` 2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar. ![Yi model chatbot interface - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp1.png?raw=true) 3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer. ![Ask a question to Yi model - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp2.png?raw=true) </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Web demo You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this senario). [Step 1: Prepare your environment](#step-1-prepare-your-environment). [Step 2: Download the Yi model](#step-2-download-the-yi-model). Step 3. To start a web service locally, run the following command. ```bash python demo/web_demo.py -c <your-model-path> ``` You can access the web UI by entering the address provided in the console into your browser. 
![Quick start - web demo](https://github.com/01-ai/Yi/blob/main/assets/img/yi_34b_chat_web_demo.gif?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Fine-tuning ```bash bash finetune/scripts/run_sft_Yi_6b.sh ``` Once finished, you can compare the finetuned model and the base model with the following command: ```bash bash finetune/scripts/run_eval.sh ``` <details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul> ### Finetune code for Yi 6B and 34B #### Preparation ##### From Image By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model. You can also prepare your customized dataset in the following `jsonl` format: ```json { "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." } ``` And then mount them in the container to replace the default ones: ```bash docker run -it \ -v /path/to/save/finetuned/model/:/finetuned-model \ -v /path/to/train.jsonl:/yi/finetune/data/train.json \ -v /path/to/eval.jsonl:/yi/finetune/data/eval.json \ ghcr.io/01-ai/yi:latest \ bash finetune/scripts/run_sft_Yi_6b.sh ``` ##### From Local Server Make sure you have conda. If not, use ```bash mkdir -p ~/miniconda3 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 rm -rf ~/miniconda3/miniconda.sh ~/miniconda3/bin/conda init bash source ~/.bashrc ``` Then, create a conda env: ```bash conda create -n dev_env python=3.10 -y conda activate dev_env pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7 ``` #### Hardware Setup For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh). A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB. #### Quick Start Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like: ```bash |-- $MODEL_PATH | |-- config.json | |-- pytorch_model-00001-of-00002.bin | |-- pytorch_model-00002-of-00002.bin | |-- pytorch_model.bin.index.json | |-- tokenizer_config.json | |-- tokenizer.model | |-- ... ``` Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static. ```bash |-- $DATA_PATH | |-- data | | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet | | |-- test-00000-of-00001-8c7c51afc6d45980.parquet | |-- dataset_infos.json | |-- README.md ``` `finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) ```bash |-- $DATA_PATH |--data |-- train.jsonl |-- eval.jsonl ``` `cd` into the scripts folder, copy and paste the script, and run. For example: ```bash cd finetune/scripts bash run_sft_Yi_6b.sh ``` For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient. 
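If you are preparing your own data for the custom-dataset path described in the Preparation section above, the sketch below shows one way to write `train.jsonl` and `eval.jsonl` in the `prompt`/`chosen` format. It is illustrative only: the sample records, the `to_sft_record` helper, and the output paths are assumptions rather than part of the official fine-tuning code.

```python
# Minimal sketch: convert question/answer pairs into the prompt/chosen jsonl
# format expected by the fine-tuning scripts. Replace the samples with your data.
import json

samples = [
    {"question": "Who are you?", "answer": "I'm Yi."},
    {"question": "What can you do?", "answer": "I can answer questions in English and Chinese."},
]

def to_sft_record(sample):
    # Wrap each pair in the "Human: ... Assistant:" prompt style used by the
    # default example dataset.
    return {
        "prompt": f"Human: {sample['question']} Assistant:",
        "chosen": sample["answer"],
    }

def write_jsonl(records, path):
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hold out the last sample for evaluation; use a proper split for real data.
write_jsonl([to_sft_record(s) for s in samples[:-1]], "train.jsonl")
write_jsonl([to_sft_record(s) for s in samples[-1:]], "eval.jsonl")
```

The resulting files can then be mounted into the container, as shown in the Docker example above, or placed under `finetune/data/` when running from a local server.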
#### Evaluation

```bash
cd finetune/scripts
bash run_eval.sh
```

Then you'll see the answer from both the base model and the finetuned model.
</ul>
</details>

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Quantization

#### GPT-Q

```bash
python quantization/gptq/quant_autogptq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/gptq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary>
<ul>

#### GPT-Q quantization

[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model.

Yi models can be GPT-Q quantized without much effort. We provide a step-by-step tutorial below.

To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama). Hugging Face Transformers has integrated Optimum and AutoGPTQ to perform GPTQ quantization on language models.

##### Do Quantization

The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:

```bash
python quant_autogptq.py --model /base_model \
  --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>

#### AWQ

```bash
python quantization/awq/quant_autoawq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/awq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary>
<ul>

#### AWQ quantization

[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) method for LLMs.

Yi models can be AWQ quantized without much effort. We provide a step-by-step tutorial below.

To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).

##### Do Quantization

The `quant_autoawq.py` script is provided for you to perform AWQ quantization:

```bash
python quant_autoawq.py --model /base_model \
  --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Deployment

If you want to deploy Yi models, make sure you meet the software and hardware requirements.

#### Software requirements

Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software |---|--- Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi) Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation) #### Hardware requirements Before deploying Yi in your environment, make sure your hardware meets the following requirements. ##### Chat models | Model | Minimum VRAM | Recommended GPU Example | |:----------------------|:--------------|:-------------------------------------:| | Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) | | Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB)<br> 1 x RTX 4060 (8 GB) | | Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) | | Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB)<br> 1 x A800 (80GB) | | Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) | | Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB)<br> 1 x A800 (40 GB) | Below are detailed minimum VRAM requirements under different batch use cases. | Model | batch=1 | batch=4 | batch=16 | batch=32 | | ----------------------- | ------- | ------- | -------- | -------- | | Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB | | Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB | | Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB | | Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB | | Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB | | Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB | ##### Base models | Model | Minimum VRAM | Recommended GPU Example | |----------------------|--------------|:-------------------------------------:| | Yi-6B | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) | | Yi-6B-200K | 50 GB | 1 x A800 (80 GB) | | Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) | | Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) | | Yi-34B-200K | 200 GB | 4 x A800 (80 GB) | <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### FAQ <details> <summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary> <br> #### 💡Fine-tuning - <strong>Base model or Chat model - which to fine-tune?</strong> <br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task. - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice. - On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice. - It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements. - <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong> <br> The key distinction between full-scale fine-tuning on `Yi-34B`and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes. - Yi-34B-Chat employs a Special Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely. - The Base model's fine-tuning is more versatile, with a relatively high performance potential. 
- If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to. - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet. #### 💡Quantization - <strong>Quantized model versus original model - what is the performance gap?</strong> - The performance variance is largely contingent on the quantization method employed and the specific use cases of these models. For instance, when it comes to models provided by the AWQ official, from a Benchmark standpoint, quantization might result in a minor performance drop of a few percentage points. - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results. #### 💡General - <strong>Where can I source fine-tuning question answering datasets?</strong> - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available. - Additionally, Github offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets. - <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong> <br> The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full parameter fine-tuning, you'll need 8 GPUs each with 80 GB; however, more economical solutions like Lora require less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance. - <strong>Are there any third-party platforms that support chat functionality for the Yi-34b-200k model?</strong> <br> If you're looking for third-party Chats, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat). </details> ### Learning hub <details> <summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary> <br> Welcome to the Yi learning hub! Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more. The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions! At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below. With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 
🥳 #### Tutorials ##### Blog tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐](https://mp.weixin.qq.com/s/Ri2ap9_5EMzdfiBhSSL_MQ) | 2024-05-20 | [苏洋](https://github.com/soulteary) | | [使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words-s](https://blog.csdn.net/freewebsys/article/details/134698597?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-17-134698597-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-20 | [fly-iot](https://gitee.com/fly-iot) | | [Yi-VL 最佳实践](https://modelscope.cn/docs/yi-vl最佳实践) | 2024-05-20 | [ModelScope](https://github.com/modelscope) | | [一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型](https://mp.weixin.qq.com/s/ntMs2G_XdWeM3I6RUOBJrA) | 2024-05-13 | [Second State](https://github.com/second-state) | | [零一万物开源Yi-1.5系列大模型](https://mp.weixin.qq.com/s/d-ogq4hcFbsuL348ExJxpA) | 2024-05-13 | [刘聪](https://github.com/liucongg) | | [零一万物Yi-1.5系列模型发布并开源! 34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦!](https://mp.weixin.qq.com/s/3wD-0dCgXB646r720o8JAg) | 2024-05-13 | [ModelScope](https://github.com/modelscope) | | [Yi-34B 本地部署简单测试](https://blog.csdn.net/arkohut/article/details/135331469?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135331469-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [漆妮妮](https://space.bilibili.com/1262370256) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上)](https://blog.csdn.net/weixin_53443275/article/details/136091398?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-5-136091398-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇)](https://blog.csdn.net/weixin_53443275/article/details/136096309) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型](https://mp.weixin.qq.com/s/bBgzGJvUqIohodcy9U-pFw) | 2024-05-13 | AI工程师笔记 | | [使用零一万物 200K 模型和 Dify 快速搭建模型应用](https://zhuanlan.zhihu.com/p/686774859) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [(持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用](https://zhuanlan.zhihu.com/p/671549900) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探](https://mp.weixin.qq.com/s/WaygSfn5T8ZPB1mPdGADEQ) | 2024-05-11 | 江湖评谈 | | [技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台)](https://blog.csdn.net/ucloud2012/article/details/137187469) | 2024-05-11 | [MumuLab](https://blog.csdn.net/ucloud2012?type=blog) | | [多模态大模型Yi-VL-plus体验 效果很棒](https://zhuanlan.zhihu.com/p/694736111) | 2024-04-27 | [大家好我是爱因](https://www.zhihu.com/people/iamein) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 
words-s](https://blog.csdn.net/freewebsys/article/details/134725765?ops_request_misc=%7B%22request%5Fid%22%3A%22171636356716800211598950%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636356716800211598950&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-9-134725765-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-27 | [fly-iot](https://gitee.com/fly-iot) | | [Getting Started with Yi-1.5-9B-Chat](https://www.secondstate.io/articles/yi-1.5-9b-chat/) | 2024-04-27 | [Second State](https://github.com/second-state) | | [基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记](https://mp.weixin.qq.com/s/_ea6g0pzzeO4WyYtuWycWQ) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版](https://blog.csdn.net/alarey/article/details/137769471?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-16-137769471-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-21 | [My的梦想已实现](https://blog.csdn.net/alarey?type=blog) | | [【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words-s,vllm要求算力在7以上的显卡就可以](https://blog.csdn.net/freewebsys/article/details/134754086) | 2024-03-22 | [fly-iot](https://gitee.com/fly-iot) | | [零一万物大模型部署+微调总结](https://blog.csdn.net/v_wus/article/details/135704126?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-18-135704126-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-22 | [v_wus](https://blog.csdn.net/v_wus?type=blog) | | [零一万物Yi大模型vllm推理时Yi-34B或Yi-6bchat重复输出的解决方案](https://blog.csdn.net/qq_39667443/article/details/136028776?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-6-136028776-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [郝铠锋](https://blog.csdn.net/qq_39667443?type=blog) | | [Yi-34B微调训练](https://blog.csdn.net/lsjlnd/article/details/135336984?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-12-135336984-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [lsjlnd](https://blog.csdn.net/lsjlnd?type=blog) | | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) | | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) | | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) | | [零一科技Yi-34B 
Chat大模型环境搭建&推理](https://blog.csdn.net/zzq1989_/article/details/135597181?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-8-135597181-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [要养家的程序员](https://blog.csdn.net/zzq1989_?type=blog) | | [基于LLaMA Factory,单卡3小时训练专属大模型 Agent](https://blog.csdn.net/m0_59596990/article/details/135760285?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135760285-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [机器学习社区](https://blog.csdn.net/m0_59596990?type=blog) | | [双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录](https://blog.csdn.net/arkohut/article/details/135321242?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135321242-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [漆妮妮](https://space.bilibili.com/1262370256) | | [【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b)](https://blog.csdn.net/qq_40302568/article/details/135040985?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-30-135040985-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [aq_Seabiscuit](https://blog.csdn.net/qq_40302568?type=blog) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://blog.csdn.net/arkohut/article/details/135274973) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行?](https://blog.csdn.net/u014374009/article/details/136327696) | 2023-12-28 | [代码讲故事](https://blog.csdn.net/u014374009?type=blog) | | [LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调](https://blog.csdn.net/BIT_666/article/details/134990402) | 2023-12-18 | [BIT_666](https://bitddd.blog.csdn.net/?type=blog) | | [通过vllm框架进行大模型推理](https://blog.csdn.net/weixin_45920955/article/details/135300561?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-13-135300561-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2023-12-18 | [土山炮](https://blog.csdn.net/weixin_45920955?type=blog) | | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) | | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) | | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) | | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 
显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) | ##### GitHub Project | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------- | | [yi-openai-proxy](https://github.com/soulteary/yi-openai-proxy) | 2024-05-11 | [苏洋](https://github.com/soulteary) | | [基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集](https://github.com/zjrwtx/bilibiliQA_databuilder) | 2024-04-29 | [正经人王同学](https://github.com/zjrwtx) | | [基于视频网站和零一万物大模型构建大语言模型高质量训练数据集](https://github.com/zjrwtx/VideoQA_databuilder) | 2024-04-25 | [正经人王同学](https://github.com/zjrwtx) | | [基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友](https://github.com/zjrwtx/open_summary) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [Food-GPT-Yi-model](https://github.com/ThisisHubert/FoodGPT-Yi-model) | 2024-04-21 | [Hubert S](https://github.com/ThisisHubert) | ##### Video tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) | | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2024-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型](https://www.bilibili.com/video/BV16i421X7Jx/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-14 | [titan909](https://space.bilibili.com/526393761) | | [Yi-1.5: True Apache 2.0 Competitor to LLAMA-3](https://www.youtube.com/watch?v=KCDYrfWeTRc) | 2024-05-13 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks](https://www.youtube.com/watch?v=Ba-G7Il0UkA) | 2024-05-13 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [how to install Ollama and run Yi 6B](https://www.youtube.com/watch?v=4Jnar7OUHqQ) | 2024-05-13 | [Ridaa Davids](https://www.youtube.com/@quantanovabusiness) | | [地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B](https://www.bilibili.com/video/BV1Xm411C7V1/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-04 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答](https://www.bilibili.com/video/BV11i421C7B5/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-03 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [基于Yi-34B的领域知识问答项目演示](https://www.bilibili.com/video/BV1zZ42177ZA/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-02 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [使用RTX4090+GaLore算法 
全参微调Yi-6B大模型](https://www.bilibili.com/video/BV1ax4y1U7Ep/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-24 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行](https://www.youtube.com/watch?v=VL-W0TnLCns) | 2024-03-20 | [刘悦的技术博客](https://v3u.cn/) | | [无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲](https://www.youtube.com/watch?v=rBvbgwz3oHM) | 2024-03-16 | [刘悦的技术博客](https://v3u.cn/) | | [量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署](https://www.bilibili.com/video/BV1jx421y7xj/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-05 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字](https://www.bilibili.com/video/BV1BB421z7oA/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-27 | [fly-iot](https://gitee.com/fly-iot) | | [Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏](https://www.bilibili.com/video/BV14J4m1e77f/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-25 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2](https://www.bilibili.com/video/BV19v421677y/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-23 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新](https://www.bilibili.com/video/BV194421F7Fy/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-20 | [fly-iot](https://gitee.com/fly-iot) | | [【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功](https://www.bilibili.com/video/BV19Z421z7cv/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-06 | [fly-iot](https://gitee.com/fly-iot) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1](https://www.bilibili.com/video/BV1tU421o7Co/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-05 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南](https://www.bilibili.com/video/BV1hC411z7xu/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-30 | [小饭护法要转码](https://space.bilibili.com/39486865?spm_id_from=333.788.0.0) | | [Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows](https://www.youtube.com/watch?v=cZs2jRtl0bs) | 2024-01-22 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试)](https://www.bilibili.com/video/BV1VT4y1b7Th/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [小吴苹果机器人](https://space.bilibili.com/1732749682?spm_id_from=333.788.0.0) | | [【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置](https://www.bilibili.com/video/BV1ia4y1y7JH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [fly-iot](https://gitee.com/fly-iot) | | [这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥](https://www.youtube.com/watch?v=eahXJrdtQuc) | 2024-01-20 | [晓漫吧](https://www.youtube.com/@xiaomanba) | | [大模型推理 NvLink 桥接器有用吗|双卡 A6000 
测试一下](https://www.bilibili.com/video/BV1AW4y1w7DC/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-17 | [漆妮妮](https://space.bilibili.com/1262370256) | | [大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能](https://www.bilibili.com/video/BV1aK4y1z7GF/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-15 | [漆妮妮](https://space.bilibili.com/1262370256) | | [C-Eval 大语言模型评测基准- 用 LM Evaluation Harness + vLLM 跑起来](https://www.bilibili.com/video/BV1Yw411g7ZL/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-11 | [漆妮妮](https://space.bilibili.com/1262370256) | | [双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录](https://www.bilibili.com/video/BV1p94y1c7ak/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-01 | [漆妮妮](https://space.bilibili.com/1262370256) | | [手把手教学!使用 vLLM 快速部署 Yi-34B-Chat](https://www.bilibili.com/video/BV1ew41157Mk/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-26 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁](https://www.bilibili.com/video/BV1uc41117zz/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-21 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s](https://www.bilibili.com/video/BV1nj41157L3/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-02 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G](https://www.bilibili.com/video/BV1BM411R7ae/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s](https://www.bilibili.com/video/BV1Hu4y1L7BH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [Yi大模型一键本地部署 技术小白玩转AI](https://www.bilibili.com/video/BV16H4y117md/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [技术小白玩转AI](https://space.bilibili.com/3546586137234288?spm_id_from=333.788.0.0) | | [01.AI's Yi-6B: Overview and Fine-Tuning](https://www.youtube.com/watch?v=mye-UOkAliQ) | 2023-11-28 | [AI Makerspace](https://www.youtube.com/@AI-Makerspace) | | [Yi 34B Chat LLM outperforms Llama 70B](https://www.youtube.com/watch?v=RYtrF-R5jDc) | 2023-11-27 | [DLExplorer](https://www.youtube.com/@DLExplorers-lg7dt) | | [How to run open source models on mac Yi 34b on m3 Max](https://www.youtube.com/watch?v=GAo-dopkgjI) | 2023-11-26 | [TECHNO PREMIUM](https://www.youtube.com/@technopremium91) | | [Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING ](https://www.youtube.com/watch?v=7WBojwwv5Qo) | 2023-11-24 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat](https://www.youtube.com/watch?v=bWCjwtu_tHs) | 2023-11-24 | [Sam Witteveen](https://www.youtube.com/@samwitteveenai) | | [在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器)](https://www.bilibili.com/video/BV1SQ4y18744/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-11-15 | [Second 
State](https://github.com/second-state) | | [Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server)](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-14 | [Second State](https://github.com/second-state) | | [How to Install Yi 34B 200K Llamafied on Windows Laptop](https://www.youtube.com/watch?v=enoha4K4HkQ) | 2023-11-11 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | </details> # Why Yi? - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) - [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k) - [Yi-9B](#yi-9b) ## Ecosystem Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity. - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) ### Upstream The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model). ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False) model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto") ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Downstream > 💡 Tip > > - Feel free to create a PR and share the fantastic work you've built using the Yi series models. > > - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`. #### Serving If you want to get up with Yi in a few minutes, you can use the following services built upon Yi. - Yi-34B-Chat: you can chat with Yi using one of the following platforms: - [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand! - [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs. - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization. #### Quantization If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage. 
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. - [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: - [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ) - [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ) - [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ) - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). - [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm). - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API - [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box. - [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. ![Chat model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_chat.png?raw=true) <details> <summary> Evaluation methods and challenges. ⬇️ </summary> - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. - **Zero-shot vs. 
few-shot**: in chat models, the zero-shot approach is more commonly employed. - **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. - **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets </details> ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. ![Base model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_base.png?raw=true) <details> <summary> Evaluation methods. ⬇️</summary> - **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. - **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. - **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. - **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. - **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. - **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". - **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. </details> #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. ![Yi-9B benchmark - details](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_details.png?raw=true) - In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - overall](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_overall.png?raw=true) - In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. 
![Yi-9B benchmark - code](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_code.png?raw=true) - In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B. ![Yi-9B benchmark - math](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_math.png?raw=true) - In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - text](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_text.png?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Who can use Yi? Everyone! 🙌 ✅ The code and weights of the Yi series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE), which means the Yi series models are free for personal usage, academic purposes, and commercial use. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Misc. ### Acknowledgments A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation. [![yi contributors](https://contrib.rocks/image?repo=01-ai/yi&max=2000&columns=15)](https://github.com/01-ai/yi/graphs/contributors) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Disclaimer We use data compliance checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct, and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### License The code and weights of the Yi-1.5 series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE). If you create derivative works based on this model, please include the following attribution in your derivative works: This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>
01-ai/Yi-34B-Chat-4bits
01-ai
2024-11-11T03:31:32Z
80
60
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "conversational", "arxiv:2403.04652", "arxiv:2311.16502", "arxiv:2401.11944", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2023-11-22T09:54:43Z
--- license: apache-2.0 widget: - example_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" - example_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline_tag: text-generation --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"> </picture> </br> </br> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml"> <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg"> </a> </div> <div style="display: inline-block;"> <a href="mailto:[email protected]"> <img src="https://img.shields.io/badge/✉️[email protected]"> </a> </div> </div> <div align="center"> <h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3> </div> <p align="center"> 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a> </p> <p align="center"> 👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a> </p> <p align="center"> 👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="有官方的微信群嘛 · Issue #43 · 01-ai/Yi" target="_blank"> 💬 WeChat </a> </p> <p align="center"> 📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a> </p> <p align="center"> 📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a> </p> <!-- DO NOT REMOVE ME --> <hr> <details open> <summary></b>📕 Table of Contents</b></summary> - [What is Yi?](#what-is-yi) - [Introduction](#introduction) - [Models](#models) - [Chat models](#chat-models) - [Base models](#base-models) - [Model info](#model-info) - [News](#news) - [How to use Yi?](#how-to-use-yi) - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - 
[llama.cpp](#quick-start---llamacpp) - [conda-lock](#quick-start---conda-lock) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) - [Why Yi?](#why-yi) - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Base model performance](#base-model-performance) - [Chat model performance](#chat-model-performance) - [Tech report](#tech-report) - [Citation](#citation) - [Who can use Yi?](#who-can-use-yi) - [Misc.](#misc) - [Acknowledgements](#acknowledgments) - [Disclaimer](#disclaimer) - [License](#license) </details> <hr> # What is Yi? ## Introduction - 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/). - 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example, - Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024). - Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023). - 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem. <details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br> > 💡 TL;DR > > The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama. - Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018. - Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi. - Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems. - However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights. - As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure. - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. 
This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/). </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## News <details> <summary>🔥 <b>2024-07-29</b>: The <a href="https://github.com/Haijian06/Yi/tree/main/Cookbook">Yi Cookbook 1.0 </a> is released, featuring tutorials and examples in both Chinese and English.</summary> </details> <details> <summary>🎯 <b>2024-05-13</b>: The <a href="https://github.com/01-ai/Yi-1.5">Yi-1.5 series models </a> are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.</summary> </details> <details> <summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary> </details> <details> <summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary> </details> <details open> <summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary> <br> In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance. </details> <details open> <summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary> <br> <code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. </details> <details open> <summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary> <br> <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li> </details> <details> <summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary> <br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ. 
- `Yi-34B-Chat` - `Yi-34B-Chat-4bits` - `Yi-34B-Chat-8bits` - `Yi-6B-Chat` - `Yi-6B-Chat-4bits` - `Yi-6B-Chat-8bits` You can try some of them interactively at: - [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Replicate](https://replicate.com/01-ai) </details> <details> <summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary> </details> <details> <summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary> <br>Application form: - [English](https://cn.mikecrm.com/l91ODJf) - [Chinese](https://cn.mikecrm.com/gnEZjiQ) </details> <details> <summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary> <br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K. </details> <details> <summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary> <br>The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Models Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment). ### Chat models | Model | Download | |---|---| |Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat) | |Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-4bits) | |Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-8bits) | |Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat) | |Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-4bits) | |Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). 
</sup></sub> ### Base models | Model | Download | |---|---| |Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits)| |Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B)| |Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub> ### Model info - For chat and base models <table> <thead> <tr> <th>Model</th> <th>Intro</th> <th>Default context window</th> <th>Pretrained tokens</th> <th>Training Data Date</th> </tr> </thead> <tbody><tr> <td>6B series models</td> <td>They are suitable for personal and academic use.</td> <td rowspan="3">4K</td> <td>3T</td> <td rowspan="3">Up to June 2023</td> </tr> <tr> <td>9B series models</td> <td>It is the best at coding and math in the Yi series models.</td> <td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td> </tr> <tr> <td>34B series models</td> <td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It&#39;s a cost-effective solution that&#39;s affordable and equipped with emergent ability.</td> <td>3T</td> </tr> </tbody></table> - For chat models <details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary> <ul> <br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. <br>However, this higher diversity might amplify certain existing issues, including: <li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.</li> <li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. 
The increased diversity can lead to varying results even under similar input conditions.</li> <li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li> <li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.</li> </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # How to use Yi? - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - [llama.cpp](#quick-start---llamacpp) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) ## Quick start > **💡 Tip**: If you want to get started with the Yi model and explore different methods for inference, check out the [Yi Cookbook](https://github.com/01-ai/Yi/tree/main/Cookbook). ### Choose your path Select one of the following paths to begin your journey with Yi! ![Quick start - Choose your path](https://github.com/01-ai/Yi/blob/main/assets/img/quick_start_path.png?raw=true) #### 🎯 Deploy Yi locally If you prefer to deploy Yi models locally, - 🙋‍♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods: - [pip](#quick-start---pip) - [Docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - 🙋‍♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp). #### 🎯 Not to deploy Yi locally If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options. ##### 🙋‍♀️ Run Yi with APIs If you want to explore more features of Yi, you can adopt one of these methods: - Yi APIs (Yi official) - [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access! - [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate) ##### 🙋‍♀️ Run Yi in playground If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options: - [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). - [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate) ##### 🙋‍♀️ Chat with Yi If you want to chat with Yi, you can use one of these online services, which offer a similar user experience: - [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face) - No registration is required. - [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). 
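If you go the API route (for example, the Replicate option listed above), a minimal sketch with the Replicate Python client might look like the following. This is an illustrative example rather than official guidance: the input field names (`prompt`, `temperature`, `max_new_tokens`) are assumptions about the hosted endpoint, so check the model's API tab on Replicate before relying on them.

```python
# Hypothetical sketch: calling the Replicate-hosted Yi-34B-Chat endpoint.
# Requires `pip install replicate` and a REPLICATE_API_TOKEN environment variable.
import replicate

output = replicate.run(
    "01-ai/yi-34b-chat",  # model slug taken from the Replicate link above
    input={
        "prompt": "hi",          # assumed field name; verify on the model's API page
        "temperature": 0.3,      # assumed field name
        "max_new_tokens": 256,   # assumed field name
    },
)

# Hosted language models on Replicate typically stream tokens, so join the pieces.
print("".join(output))
```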
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - pip This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference. #### Step 0: Prerequisites - Make sure Python 3.10 or a later version is installed. - If you want to run other Yi models, see [software and hardware requirements](#deployment). #### Step 1: Prepare your environment To set up the environment and install the required packages, execute the following command. ```bash git clone https://github.com/01-ai/Yi.git cd yi pip install -r requirements.txt ``` #### Step 2: Download the Yi model You can download the weights and tokenizer of Yi models from the following sources: - [Hugging Face](https://huggingface.co/01-ai) - [ModelScope](https://www.modelscope.cn/organization/01ai/) - [WiseModel](https://wisemodel.cn/organization/01.AI) #### Step 3: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model 1. Create a file named `quick_start.py` and copy the following content to it. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = '<your-model-path>' tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) # Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM. model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ``` 2. Run `quick_start.py`. ```bash python quick_start.py ``` Then you can see an output similar to the one below. 🥳 ```bash Hello! How can I assist you today? ``` ##### Perform inference with Yi base model - Yi-34B The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model). You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo). ```bash python demo/text_generation.py --model <your-model-path> ``` Then you can see an output similar to the one below. 🥳 <details> <summary>Output. ⬇️ </summary> <br> **Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry, **Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up... 
</details> - Yi-9B Input ```python from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_DIR = "01-ai/Yi-9B" model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto") tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False) input_text = "# write the quick sort algorithm" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=256) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output ```python # write the quick sort algorithm def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) # test the quick sort algorithm print(quick_sort([3, 6, 8, 10, 1, 2, 1])) ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - Docker <details> <summary> Run Yi-34B-Chat locally with Docker: a step-by-step guide. ⬇️</summary> <br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference. <h4>Step 0: Prerequisites</h4> <p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p> <h4> Step 1: Start Docker </h4> <pre><code>docker run -it --gpus all \ -v &lt;your-model-path&gt;:/models ghcr.io/01-ai/yi:latest </code></pre> <p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p> <h4>Step 2: Perform inference</h4> <p>You can perform inference with Yi chat or base models as below.</p> <h5>Perform inference with Yi chat model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>model_path = '&lt;your-model-mount-path&gt;'</code> instead of <code>model_path = '&lt;your-model-path&gt;'</code>.</p> <h5>Perform inference with Yi base model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>--model &lt;your-model-mount-path&gt;</code> instead of <code>--model &lt;your-model-path&gt;</code>.</p> </details> ### Quick start - conda-lock <details> <summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary> <br> You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies. <br> To install the dependencies, follow these steps: 1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>. 2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies. 
</details> ### Quick start - llama.cpp <a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">The following tutorial </a> will guide you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference. <details> <summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary> <br><a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">This tutorial</a> guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p> - [Step 0: Prerequisites](#step-0-prerequisites) - [Step 1: Download llama.cpp](#step-1-download-llamacpp) - [Step 2: Download Yi model](#step-2-download-yi-model) - [Step 3: Perform inference](#step-3-perform-inference) #### Step 0: Prerequisites - This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip. - Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine. #### Step 1: Download `llama.cpp` To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command. ```bash git clone [email protected]:ggerganov/llama.cpp.git ``` #### Step 2: Download Yi model 2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command. ```bash GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF ``` 2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command. ```bash git-lfs pull --include yi-chat-6b.Q2_K.gguf ``` #### Step 3: Perform inference To perform inference with the Yi model, you can use one of the following methods. - [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal) - [Method 2: Perform inference in web](#method-2-perform-inference-in-web) ##### Method 1: Perform inference in terminal To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command. > ##### Tips > > - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model. > > - By default, the model operates in completion mode. > > - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage. ```bash make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e ... How do you feed your pet fox? Please answer this question in 6 simple steps: Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables. Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day. Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise. 
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress. Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections. Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care. ... ``` Now you have successfully asked a question to the Yi model and got an answer! 🥳 ##### Method 2: Perform inference in web 1. To initialize a lightweight and swift chatbot, run the following command. ```bash cd llama.cpp ./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf ``` Then you can get an output like this: ```bash ... llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 5000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M2 Pro ggml_metal_init: picking default device: Apple M2 Pro ggml_metal_init: ggml.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal' ggml_metal_init: GPU name: Apple M2 Pro ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008) ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB ggml_metal_init: maxTransferRate = built-in GPU ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67) llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67) llama_build_graph: non-view tensors processed: 676/676 llama_new_context_with_model: compute buffer total size = 159.19 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67) Available slots: -> Slot 0 - max context: 2048 llama server listening at http://0.0.0.0:8080 ``` 2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar. ![Yi model chatbot interface - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp1.png?raw=true) 3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer. ![Ask a question to Yi model - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp2.png?raw=true) </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Web demo You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this senario). [Step 1: Prepare your environment](#step-1-prepare-your-environment). [Step 2: Download the Yi model](#step-2-download-the-yi-model). Step 3. To start a web service locally, run the following command. ```bash python demo/web_demo.py -c <your-model-path> ``` You can access the web UI by entering the address provided in the console into your browser. 
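If you want to customize the UI instead of using the bundled `demo/web_demo.py`, the sketch below builds a minimal chat page with [Gradio](https://www.gradio.app/)'s `ChatInterface`. It is an illustrative alternative, not part of the Yi repository: it assumes `gradio` is installed (`pip install gradio`), reuses the model-loading code from the pip quick start, and receives the chat history as `[user, assistant]` pairs (Gradio's default tuple format).

```python
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '<your-model-path>'

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

def chat(message, history):
    # Rebuild the full conversation so the chat template sees every previous turn.
    messages = []
    for user_msg, assistant_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": message})

    input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True,
                                              add_generation_prompt=True, return_tensors='pt')
    output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=512)
    # Return only the newly generated tokens, not the prompt.
    return tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

gr.ChatInterface(chat, title="Yi chat demo (sketch)").launch(server_name="0.0.0.0")
```

Like the bundled demo, this serves a local web page; open the address printed in the console to chat with the model.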
![Quick start - web demo](https://github.com/01-ai/Yi/blob/main/assets/img/yi_34b_chat_web_demo.gif?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Fine-tuning ```bash bash finetune/scripts/run_sft_Yi_6b.sh ``` Once finished, you can compare the finetuned model and the base model with the following command: ```bash bash finetune/scripts/run_eval.sh ``` <details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul> ### Finetune code for Yi 6B and 34B #### Preparation ##### From Image By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model. You can also prepare your customized dataset in the following `jsonl` format: ```json { "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." } ``` And then mount them in the container to replace the default ones: ```bash docker run -it \ -v /path/to/save/finetuned/model/:/finetuned-model \ -v /path/to/train.jsonl:/yi/finetune/data/train.json \ -v /path/to/eval.jsonl:/yi/finetune/data/eval.json \ ghcr.io/01-ai/yi:latest \ bash finetune/scripts/run_sft_Yi_6b.sh ``` ##### From Local Server Make sure you have conda. If not, use ```bash mkdir -p ~/miniconda3 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 rm -rf ~/miniconda3/miniconda.sh ~/miniconda3/bin/conda init bash source ~/.bashrc ``` Then, create a conda env: ```bash conda create -n dev_env python=3.10 -y conda activate dev_env pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7 ``` #### Hardware Setup For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh). A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB. #### Quick Start Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like: ```bash |-- $MODEL_PATH | |-- config.json | |-- pytorch_model-00001-of-00002.bin | |-- pytorch_model-00002-of-00002.bin | |-- pytorch_model.bin.index.json | |-- tokenizer_config.json | |-- tokenizer.model | |-- ... ``` Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static. ```bash |-- $DATA_PATH | |-- data | | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet | | |-- test-00000-of-00001-8c7c51afc6d45980.parquet | |-- dataset_infos.json | |-- README.md ``` `finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) ```bash |-- $DATA_PATH |--data |-- train.jsonl |-- eval.jsonl ``` `cd` into the scripts folder, copy and paste the script, and run. For example: ```bash cd finetune/scripts bash run_sft_Yi_6b.sh ``` For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient. 
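The fine-tuning scripts consume the `prompt`/`chosen` JSONL layout shown at the beginning of this section. If your raw data is in another form, a small conversion script is usually enough. The sketch below is illustrative only: the input file `qa_pairs.csv`, its `question`/`answer` columns, the 95/5 split, and the output paths are assumptions you should adapt to your own data.

```python
import csv
import json
import random

# Read simple question/answer pairs and shuffle them for a train/eval split.
rows = list(csv.DictReader(open("qa_pairs.csv", encoding="utf-8")))
random.seed(0)
random.shuffle(rows)
split = int(len(rows) * 0.95)

def dump_jsonl(path, subset):
    # Write one JSON object per line in the prompt/chosen format expected by the SFT scripts.
    with open(path, "w", encoding="utf-8") as f:
        for row in subset:
            record = {
                "prompt": f"Human: {row['question']} Assistant:",
                "chosen": row["answer"],
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

dump_jsonl("finetune/data/train.jsonl", rows[:split])
dump_jsonl("finetune/data/eval.jsonl", rows[split:])
```

Point the training script (or the Docker mounts shown above) at the generated `train.jsonl` and `eval.jsonl`, then run the fine-tuning as described.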
#### Evaluation ```bash cd finetune/scripts bash run_eval.sh ``` Then you'll see the answer from both the base model and the finetuned model. </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quantization #### GPT-Q ```bash python quantization/gptq/quant_autogptq.py \ --model /base_model \ --output_dir /quantized_model \ --trust_remote_code ``` Once finished, you can then evaluate the resulting model as follows: ```bash python quantization/gptq/eval_quantized_model.py \ --model /quantized_model \ --trust_remote_code ``` <details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul> #### GPT-Q quantization [GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model. Yi models can be GPT-Q quantized without a lot of efforts. We provide a step-by-step tutorial below. To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama). And the huggingface transformers has integrated optimum and auto-gptq to perform GPTQ quantization on language models. ##### Do Quantization The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization: ```bash python quant_autogptq.py --model /base_model \ --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code ``` ##### Run Quantized Model You can run a quantized model using the `eval_quantized_model.py`: ```bash python eval_quantized_model.py --model /quantized_model --trust_remote_code ``` </ul> </details> #### AWQ ```bash python quantization/awq/quant_autoawq.py \ --model /base_model \ --output_dir /quantized_model \ --trust_remote_code ``` Once finished, you can then evaluate the resulting model as follows: ```bash python quantization/awq/eval_quantized_model.py \ --model /quantized_model \ --trust_remote_code ``` <details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul> #### AWQ quantization [AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs. Yi models can be AWQ quantized without a lot of efforts. We provide a step-by-step tutorial below. To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ). ##### Do Quantization The `quant_autoawq.py` script is provided for you to perform AWQ quantization: ```bash python quant_autoawq.py --model /base_model \ --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code ``` ##### Run Quantized Model You can run a quantized model using the `eval_quantized_model.py`: ```bash python eval_quantized_model.py --model /quantized_model --trust_remote_code ``` </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Deployment If you want to deploy Yi models, make sure you meet the software and hardware requirements. #### Software requirements Before using Yi quantized models, make sure you've installed the correct software listed below. 
| Model | Software |
|---|---|
| Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi) |
| Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation) |

#### Hardware requirements

Before deploying Yi in your environment, make sure your hardware meets the following requirements.

##### Chat models

| Model | Minimum VRAM | Recommended GPU Example |
|:----------------------|:--------------|:-------------------------------------:|
| Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB) <br> 1 x A800 (40 GB) |

Below are detailed minimum VRAM requirements under different batch use cases.

| Model | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB |
| Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB |
| Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB |
| Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB |
| Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB |
| Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB |

##### Base models

| Model | Minimum VRAM | Recommended GPU Example |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-200K | 50 GB | 1 x A800 (80 GB) |
| Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) |
| Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-200K | 200 GB | 4 x A800 (80 GB) |

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### FAQ
<details>
<summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary>
<br>

#### 💡Fine-tuning
- <strong>Base model or Chat model - which to fine-tune?</strong>
  <br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task.
  - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice.
  - On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice.
  - It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements.
- <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong>
  <br>
  The key distinction between full-scale fine-tuning on `Yi-34B` and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes.
  - Yi-34B-Chat employs a Supervised Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely.
  - The Base model's fine-tuning is more versatile, with a relatively high performance potential.
- If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to. - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet. #### 💡Quantization - <strong>Quantized model versus original model - what is the performance gap?</strong> - The performance variance is largely contingent on the quantization method employed and the specific use cases of these models. For instance, when it comes to models provided by the AWQ official, from a Benchmark standpoint, quantization might result in a minor performance drop of a few percentage points. - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results. #### 💡General - <strong>Where can I source fine-tuning question answering datasets?</strong> - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available. - Additionally, Github offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets. - <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong> <br> The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full parameter fine-tuning, you'll need 8 GPUs each with 80 GB; however, more economical solutions like Lora require less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance. - <strong>Are there any third-party platforms that support chat functionality for the Yi-34b-200k model?</strong> <br> If you're looking for third-party Chats, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat). </details> ### Learning hub <details> <summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary> <br> Welcome to the Yi learning hub! Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more. The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions! At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below. With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 
🥳 #### Tutorials ##### Blog tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐](https://mp.weixin.qq.com/s/Ri2ap9_5EMzdfiBhSSL_MQ) | 2024-05-20 | [苏洋](https://github.com/soulteary) | | [使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words-s](https://blog.csdn.net/freewebsys/article/details/134698597?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-17-134698597-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-20 | [fly-iot](https://gitee.com/fly-iot) | | [Yi-VL 最佳实践](https://modelscope.cn/docs/yi-vl最佳实践) | 2024-05-20 | [ModelScope](https://github.com/modelscope) | | [一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型](https://mp.weixin.qq.com/s/ntMs2G_XdWeM3I6RUOBJrA) | 2024-05-13 | [Second State](https://github.com/second-state) | | [零一万物开源Yi-1.5系列大模型](https://mp.weixin.qq.com/s/d-ogq4hcFbsuL348ExJxpA) | 2024-05-13 | [刘聪](https://github.com/liucongg) | | [零一万物Yi-1.5系列模型发布并开源! 34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦!](https://mp.weixin.qq.com/s/3wD-0dCgXB646r720o8JAg) | 2024-05-13 | [ModelScope](https://github.com/modelscope) | | [Yi-34B 本地部署简单测试](https://blog.csdn.net/arkohut/article/details/135331469?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135331469-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [漆妮妮](https://space.bilibili.com/1262370256) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上)](https://blog.csdn.net/weixin_53443275/article/details/136091398?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-5-136091398-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇)](https://blog.csdn.net/weixin_53443275/article/details/136096309) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型](https://mp.weixin.qq.com/s/bBgzGJvUqIohodcy9U-pFw) | 2024-05-13 | AI工程师笔记 | | [使用零一万物 200K 模型和 Dify 快速搭建模型应用](https://zhuanlan.zhihu.com/p/686774859) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [(持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用](https://zhuanlan.zhihu.com/p/671549900) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探](https://mp.weixin.qq.com/s/WaygSfn5T8ZPB1mPdGADEQ) | 2024-05-11 | 江湖评谈 | | [技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台)](https://blog.csdn.net/ucloud2012/article/details/137187469) | 2024-05-11 | [MumuLab](https://blog.csdn.net/ucloud2012?type=blog) | | [多模态大模型Yi-VL-plus体验 效果很棒](https://zhuanlan.zhihu.com/p/694736111) | 2024-04-27 | [大家好我是爱因](https://www.zhihu.com/people/iamein) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 
words-s](https://blog.csdn.net/freewebsys/article/details/134725765?ops_request_misc=%7B%22request%5Fid%22%3A%22171636356716800211598950%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636356716800211598950&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-9-134725765-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-27 | [fly-iot](https://gitee.com/fly-iot) | | [Getting Started with Yi-1.5-9B-Chat](https://www.secondstate.io/articles/yi-1.5-9b-chat/) | 2024-04-27 | [Second State](https://github.com/second-state) | | [基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记](https://mp.weixin.qq.com/s/_ea6g0pzzeO4WyYtuWycWQ) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版](https://blog.csdn.net/alarey/article/details/137769471?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-16-137769471-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-21 | [My的梦想已实现](https://blog.csdn.net/alarey?type=blog) | | [【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words-s,vllm要求算力在7以上的显卡就可以](https://blog.csdn.net/freewebsys/article/details/134754086) | 2024-03-22 | [fly-iot](https://gitee.com/fly-iot) | | [零一万物大模型部署+微调总结](https://blog.csdn.net/v_wus/article/details/135704126?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-18-135704126-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-22 | [v_wus](https://blog.csdn.net/v_wus?type=blog) | | [零一万物Yi大模型vllm推理时Yi-34B或Yi-6bchat重复输出的解决方案](https://blog.csdn.net/qq_39667443/article/details/136028776?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-6-136028776-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [郝铠锋](https://blog.csdn.net/qq_39667443?type=blog) | | [Yi-34B微调训练](https://blog.csdn.net/lsjlnd/article/details/135336984?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-12-135336984-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [lsjlnd](https://blog.csdn.net/lsjlnd?type=blog) | | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) | | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) | | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) | | [零一科技Yi-34B 
Chat大模型环境搭建&推理](https://blog.csdn.net/zzq1989_/article/details/135597181?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-8-135597181-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [要养家的程序员](https://blog.csdn.net/zzq1989_?type=blog) | | [基于LLaMA Factory,单卡3小时训练专属大模型 Agent](https://blog.csdn.net/m0_59596990/article/details/135760285?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135760285-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [机器学习社区](https://blog.csdn.net/m0_59596990?type=blog) | | [双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录](https://blog.csdn.net/arkohut/article/details/135321242?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135321242-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [漆妮妮](https://space.bilibili.com/1262370256) | | [【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b)](https://blog.csdn.net/qq_40302568/article/details/135040985?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-30-135040985-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [aq_Seabiscuit](https://blog.csdn.net/qq_40302568?type=blog) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://blog.csdn.net/arkohut/article/details/135274973) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行?](https://blog.csdn.net/u014374009/article/details/136327696) | 2023-12-28 | [代码讲故事](https://blog.csdn.net/u014374009?type=blog) | | [LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调](https://blog.csdn.net/BIT_666/article/details/134990402) | 2023-12-18 | [BIT_666](https://bitddd.blog.csdn.net/?type=blog) | | [通过vllm框架进行大模型推理](https://blog.csdn.net/weixin_45920955/article/details/135300561?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-13-135300561-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2023-12-18 | [土山炮](https://blog.csdn.net/weixin_45920955?type=blog) | | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) | | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) | | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) | | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 
显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) | ##### GitHub Project | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------- | | [yi-openai-proxy](https://github.com/soulteary/yi-openai-proxy) | 2024-05-11 | [苏洋](https://github.com/soulteary) | | [基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集](https://github.com/zjrwtx/bilibiliQA_databuilder) | 2024-04-29 | [正经人王同学](https://github.com/zjrwtx) | | [基于视频网站和零一万物大模型构建大语言模型高质量训练数据集](https://github.com/zjrwtx/VideoQA_databuilder) | 2024-04-25 | [正经人王同学](https://github.com/zjrwtx) | | [基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友](https://github.com/zjrwtx/open_summary) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [Food-GPT-Yi-model](https://github.com/ThisisHubert/FoodGPT-Yi-model) | 2024-04-21 | [Hubert S](https://github.com/ThisisHubert) | ##### Video tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) | | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2024-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型](https://www.bilibili.com/video/BV16i421X7Jx/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-14 | [titan909](https://space.bilibili.com/526393761) | | [Yi-1.5: True Apache 2.0 Competitor to LLAMA-3](https://www.youtube.com/watch?v=KCDYrfWeTRc) | 2024-05-13 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks](https://www.youtube.com/watch?v=Ba-G7Il0UkA) | 2024-05-13 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [how to install Ollama and run Yi 6B](https://www.youtube.com/watch?v=4Jnar7OUHqQ) | 2024-05-13 | [Ridaa Davids](https://www.youtube.com/@quantanovabusiness) | | [地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B](https://www.bilibili.com/video/BV1Xm411C7V1/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-04 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答](https://www.bilibili.com/video/BV11i421C7B5/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-03 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [基于Yi-34B的领域知识问答项目演示](https://www.bilibili.com/video/BV1zZ42177ZA/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-02 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [使用RTX4090+GaLore算法 
全参微调Yi-6B大模型](https://www.bilibili.com/video/BV1ax4y1U7Ep/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-24 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行](https://www.youtube.com/watch?v=VL-W0TnLCns) | 2024-03-20 | [刘悦的技术博客](https://v3u.cn/) | | [无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲](https://www.youtube.com/watch?v=rBvbgwz3oHM) | 2024-03-16 | [刘悦的技术博客](https://v3u.cn/) | | [量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署](https://www.bilibili.com/video/BV1jx421y7xj/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-05 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字](https://www.bilibili.com/video/BV1BB421z7oA/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-27 | [fly-iot](https://gitee.com/fly-iot) | | [Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏](https://www.bilibili.com/video/BV14J4m1e77f/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-25 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2](https://www.bilibili.com/video/BV19v421677y/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-23 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新](https://www.bilibili.com/video/BV194421F7Fy/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-20 | [fly-iot](https://gitee.com/fly-iot) | | [【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功](https://www.bilibili.com/video/BV19Z421z7cv/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-06 | [fly-iot](https://gitee.com/fly-iot) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1](https://www.bilibili.com/video/BV1tU421o7Co/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-05 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南](https://www.bilibili.com/video/BV1hC411z7xu/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-30 | [小饭护法要转码](https://space.bilibili.com/39486865?spm_id_from=333.788.0.0) | | [Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows](https://www.youtube.com/watch?v=cZs2jRtl0bs) | 2024-01-22 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试)](https://www.bilibili.com/video/BV1VT4y1b7Th/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [小吴苹果机器人](https://space.bilibili.com/1732749682?spm_id_from=333.788.0.0) | | [【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置](https://www.bilibili.com/video/BV1ia4y1y7JH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [fly-iot](https://gitee.com/fly-iot) | | [这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥](https://www.youtube.com/watch?v=eahXJrdtQuc) | 2024-01-20 | [晓漫吧](https://www.youtube.com/@xiaomanba) | | [大模型推理 NvLink 桥接器有用吗|双卡 A6000 
测试一下](https://www.bilibili.com/video/BV1AW4y1w7DC/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-17 | [漆妮妮](https://space.bilibili.com/1262370256) | | [大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能](https://www.bilibili.com/video/BV1aK4y1z7GF/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-15 | [漆妮妮](https://space.bilibili.com/1262370256) | | [C-Eval 大语言模型评测基准- 用 LM Evaluation Harness + vLLM 跑起来](https://www.bilibili.com/video/BV1Yw411g7ZL/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-11 | [漆妮妮](https://space.bilibili.com/1262370256) | | [双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录](https://www.bilibili.com/video/BV1p94y1c7ak/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-01 | [漆妮妮](https://space.bilibili.com/1262370256) | | [手把手教学!使用 vLLM 快速部署 Yi-34B-Chat](https://www.bilibili.com/video/BV1ew41157Mk/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-26 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁](https://www.bilibili.com/video/BV1uc41117zz/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-21 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s](https://www.bilibili.com/video/BV1nj41157L3/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-02 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G](https://www.bilibili.com/video/BV1BM411R7ae/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s](https://www.bilibili.com/video/BV1Hu4y1L7BH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [Yi大模型一键本地部署 技术小白玩转AI](https://www.bilibili.com/video/BV16H4y117md/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [技术小白玩转AI](https://space.bilibili.com/3546586137234288?spm_id_from=333.788.0.0) | | [01.AI's Yi-6B: Overview and Fine-Tuning](https://www.youtube.com/watch?v=mye-UOkAliQ) | 2023-11-28 | [AI Makerspace](https://www.youtube.com/@AI-Makerspace) | | [Yi 34B Chat LLM outperforms Llama 70B](https://www.youtube.com/watch?v=RYtrF-R5jDc) | 2023-11-27 | [DLExplorer](https://www.youtube.com/@DLExplorers-lg7dt) | | [How to run open source models on mac Yi 34b on m3 Max](https://www.youtube.com/watch?v=GAo-dopkgjI) | 2023-11-26 | [TECHNO PREMIUM](https://www.youtube.com/@technopremium91) | | [Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING ](https://www.youtube.com/watch?v=7WBojwwv5Qo) | 2023-11-24 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat](https://www.youtube.com/watch?v=bWCjwtu_tHs) | 2023-11-24 | [Sam Witteveen](https://www.youtube.com/@samwitteveenai) | | [在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器)](https://www.bilibili.com/video/BV1SQ4y18744/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-11-15 | [Second 
State](https://github.com/second-state) | | [Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server)](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-14 | [Second State](https://github.com/second-state) | | [How to Install Yi 34B 200K Llamafied on Windows Laptop](https://www.youtube.com/watch?v=enoha4K4HkQ) | 2023-11-11 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | </details> # Why Yi? - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) - [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k) - [Yi-9B](#yi-9b) ## Ecosystem Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity. - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) ### Upstream The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model). ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False) model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto") ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Downstream > 💡 Tip > > - Feel free to create a PR and share the fantastic work you've built using the Yi series models. > > - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`. #### Serving If you want to get up with Yi in a few minutes, you can use the following services built upon Yi. - Yi-34B-Chat: you can chat with Yi using one of the following platforms: - [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand! - [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs. - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization. #### Quantization If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage. 
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. - [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: - [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ) - [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ) - [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ) - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). - [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm). - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API - [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box. - [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. ![Chat model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_chat.png?raw=true) <details> <summary> Evaluation methods and challenges. ⬇️ </summary> - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. - **Zero-shot vs. 
few-shot**: in chat models, the zero-shot approach is more commonly employed. - **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. - **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets </details> ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. ![Base model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_base.png?raw=true) <details> <summary> Evaluation methods. ⬇️</summary> - **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. - **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. - **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. - **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. - **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. - **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". - **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. </details> #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. ![Yi-9B benchmark - details](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_details.png?raw=true) - In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - overall](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_overall.png?raw=true) - In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. 
![Yi-9B benchmark - code](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_code.png?raw=true) - In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B. ![Yi-9B benchmark - math](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_math.png?raw=true) - In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - text](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_text.png?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Who can use Yi? Everyone! 🙌 ✅ The code and weights of the Yi series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE), which means the Yi series models are free for personal usage, academic purposes, and commercial use. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Misc. ### Acknowledgments A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation. [![yi contributors](https://contrib.rocks/image?repo=01-ai/yi&max=2000&columns=15)](https://github.com/01-ai/yi/graphs/contributors) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Disclaimer We use data compliance checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct, and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### License The code and weights of the Yi-1.5 series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE). If you create derivative works based on this model, please include the following attribution in your derivative works: This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>
joe611/chickens-composite-201616161616-150-epochs-wo-transform-metrics-test-shfld
joe611
2024-11-11T03:15:20Z
56
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-11-10T23:07:56Z
--- library_name: transformers license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: chickens-composite-201616161616-150-epochs-wo-transform-metrics-test-shfld results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chickens-composite-201616161616-150-epochs-wo-transform-metrics-test-shfld This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3059 - Map: 0.8044 - Map 50: 0.9405 - Map 75: 0.9024 - Map Small: 0.2979 - Map Medium: 0.8141 - Map Large: 0.7843 - Mar 1: 0.3221 - Mar 10: 0.8382 - Mar 100: 0.8419 - Mar Small: 0.3829 - Mar Medium: 0.8546 - Mar Large: 0.8145 - Map Chicken: 0.7936 - Mar 100 Chicken: 0.844 - Map Duck: 0.7475 - Mar 100 Duck: 0.7804 - Map Plant: 0.8722 - Mar 100 Plant: 0.9012 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Chicken | Mar 100 Chicken | Map Duck | Mar 100 Duck | Map Plant | Mar 100 Plant | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:-----------:|:---------------:|:--------:|:------------:|:---------:|:-------------:| | 1.4551 | 1.0 | 500 | 1.3226 | 0.1944 | 0.2674 | 0.2295 | 0.0337 | 0.1092 | 0.2465 | 0.0876 | 0.3262 | 0.4233 | 0.13 | 0.4015 | 0.4322 | 0.0578 | 0.5155 | 0.0 | 0.0 | 0.5256 | 0.7545 | | 1.1004 | 2.0 | 1000 | 1.0223 | 0.2841 | 0.4037 | 0.341 | 0.0583 | 0.2422 | 0.3237 | 0.1149 | 0.4071 | 0.464 | 0.1281 | 0.4375 | 0.4936 | 0.1791 | 0.6532 | 0.0 | 0.0 | 0.6732 | 0.7388 | | 0.9029 | 3.0 | 1500 | 0.8779 | 0.3548 | 0.5047 | 0.4101 | 0.026 | 0.3151 | 0.3991 | 0.1258 | 0.4634 | 0.4674 | 0.0933 | 0.4451 | 0.4783 | 0.3786 | 0.675 | 0.0 | 0.0 | 0.6859 | 0.7273 | | 0.8533 | 4.0 | 2000 | 0.7691 | 0.3823 | 0.5563 | 0.4418 | 0.0372 | 0.3522 | 0.4082 | 0.1294 | 0.4671 | 0.4695 | 0.0957 | 0.4454 | 0.4898 | 0.4386 | 0.6528 | 0.0 | 0.0 | 0.7083 | 0.7558 | | 0.7318 | 5.0 | 2500 | 0.6937 | 0.4096 | 0.5624 | 0.4792 | 0.0495 | 0.3833 | 0.4163 | 0.139 | 0.4914 | 0.4955 | 0.109 | 0.4748 | 0.5012 | 0.4942 | 0.7083 | 0.0 | 0.0 | 0.7347 | 0.7782 | | 0.756 | 6.0 | 3000 | 0.6431 | 0.4239 | 0.5803 | 0.4923 | 0.0634 | 0.3994 | 0.4443 | 0.1419 | 0.4941 | 0.4974 | 0.131 | 0.4719 | 0.5195 | 0.5296 | 0.7095 | 0.0 | 0.0 | 0.7422 | 0.7827 | | 0.6901 | 7.0 | 3500 | 0.6218 | 0.44 | 0.6019 | 0.5287 | 0.0735 | 0.4207 | 0.4376 | 0.142 | 0.499 | 0.5016 | 0.1362 | 0.4833 | 0.5041 | 0.582 | 0.7246 | 0.0 | 0.0 | 0.738 | 0.7803 | | 0.666 | 8.0 | 4000 | 0.5863 | 0.4611 | 0.6199 | 0.5421 | 0.0933 | 0.438 | 0.4818 | 0.1472 | 0.5093 | 0.5128 | 0.1738 | 0.4923 | 0.528 
| 0.6304 | 0.7452 | 0.0 | 0.0 | 0.7528 | 0.793 | | 0.5968 | 9.0 | 4500 | 0.5396 | 0.4825 | 0.6337 | 0.5724 | 0.1276 | 0.4593 | 0.4868 | 0.1524 | 0.5194 | 0.5243 | 0.2205 | 0.5027 | 0.5385 | 0.6663 | 0.7563 | 0.0 | 0.0 | 0.7811 | 0.8167 | | 0.593 | 10.0 | 5000 | 0.5442 | 0.4858 | 0.6479 | 0.5773 | 0.1346 | 0.4638 | 0.4924 | 0.1572 | 0.5185 | 0.5213 | 0.2067 | 0.5018 | 0.5245 | 0.6555 | 0.7325 | 0.0224 | 0.0186 | 0.7795 | 0.8127 | | 0.4866 | 11.0 | 5500 | 0.5145 | 0.5446 | 0.7266 | 0.6558 | 0.1763 | 0.5315 | 0.5297 | 0.1912 | 0.5863 | 0.5881 | 0.2514 | 0.5767 | 0.5633 | 0.6654 | 0.7381 | 0.1821 | 0.2041 | 0.7864 | 0.8221 | | 0.5661 | 12.0 | 6000 | 0.5027 | 0.6187 | 0.855 | 0.7639 | 0.1668 | 0.6262 | 0.5823 | 0.2407 | 0.6618 | 0.665 | 0.2424 | 0.6708 | 0.6198 | 0.6598 | 0.7234 | 0.423 | 0.4598 | 0.7732 | 0.8118 | | 0.4722 | 13.0 | 6500 | 0.4694 | 0.6815 | 0.9159 | 0.8152 | 0.1688 | 0.6799 | 0.6589 | 0.2779 | 0.7283 | 0.7321 | 0.2929 | 0.7339 | 0.707 | 0.6835 | 0.7456 | 0.581 | 0.634 | 0.78 | 0.8167 | | 0.5192 | 14.0 | 7000 | 0.4483 | 0.6856 | 0.9323 | 0.7876 | 0.2121 | 0.6727 | 0.7362 | 0.2804 | 0.7363 | 0.7404 | 0.3452 | 0.727 | 0.7832 | 0.6626 | 0.7321 | 0.597 | 0.6546 | 0.7972 | 0.8345 | | 0.4858 | 15.0 | 7500 | 0.4415 | 0.6978 | 0.9354 | 0.8359 | 0.1552 | 0.6764 | 0.7535 | 0.2856 | 0.7474 | 0.7515 | 0.279 | 0.7398 | 0.7872 | 0.6837 | 0.7488 | 0.6176 | 0.6773 | 0.7921 | 0.8285 | | 0.4536 | 16.0 | 8000 | 0.4759 | 0.6603 | 0.9171 | 0.7832 | 0.1173 | 0.6491 | 0.6453 | 0.2667 | 0.7049 | 0.7069 | 0.1962 | 0.7042 | 0.6825 | 0.6634 | 0.7234 | 0.544 | 0.5866 | 0.7734 | 0.8106 | | 0.4177 | 17.0 | 8500 | 0.4158 | 0.7044 | 0.9278 | 0.8406 | 0.1923 | 0.6974 | 0.7167 | 0.2867 | 0.7507 | 0.7529 | 0.3029 | 0.751 | 0.7592 | 0.7209 | 0.7782 | 0.596 | 0.6474 | 0.7963 | 0.833 | | 0.4261 | 18.0 | 9000 | 0.3972 | 0.716 | 0.9389 | 0.8733 | 0.2513 | 0.7088 | 0.7355 | 0.2856 | 0.7589 | 0.7616 | 0.339 | 0.7577 | 0.7763 | 0.7259 | 0.7782 | 0.6258 | 0.6742 | 0.7965 | 0.8324 | | 0.4455 | 19.0 | 9500 | 0.4175 | 0.7075 | 0.9156 | 0.8448 | 0.1643 | 0.7041 | 0.7268 | 0.2843 | 0.7467 | 0.7517 | 0.2829 | 0.7484 | 0.7631 | 0.7165 | 0.773 | 0.6035 | 0.6423 | 0.8024 | 0.8397 | | 0.4156 | 20.0 | 10000 | 0.4091 | 0.7115 | 0.9162 | 0.8518 | 0.2067 | 0.7045 | 0.7171 | 0.2833 | 0.7488 | 0.7532 | 0.3076 | 0.7459 | 0.761 | 0.7287 | 0.781 | 0.6108 | 0.6485 | 0.7951 | 0.8303 | | 0.4166 | 21.0 | 10500 | 0.3732 | 0.7298 | 0.9467 | 0.8933 | 0.2131 | 0.7254 | 0.7506 | 0.2888 | 0.7713 | 0.7759 | 0.3005 | 0.7731 | 0.7839 | 0.739 | 0.7909 | 0.643 | 0.6979 | 0.8074 | 0.8388 | | 0.4243 | 22.0 | 11000 | 0.3636 | 0.7327 | 0.9489 | 0.884 | 0.2297 | 0.7268 | 0.766 | 0.2938 | 0.7751 | 0.7777 | 0.299 | 0.7759 | 0.7997 | 0.7405 | 0.7813 | 0.6469 | 0.7072 | 0.8108 | 0.8445 | | 0.4215 | 23.0 | 11500 | 0.3797 | 0.7256 | 0.9357 | 0.8647 | 0.209 | 0.7063 | 0.752 | 0.2923 | 0.7676 | 0.7697 | 0.2938 | 0.7585 | 0.7855 | 0.7336 | 0.7853 | 0.6427 | 0.6887 | 0.8006 | 0.8352 | | 0.3449 | 24.0 | 12000 | 0.3634 | 0.7489 | 0.9419 | 0.9009 | 0.1651 | 0.7472 | 0.7848 | 0.3042 | 0.7905 | 0.7947 | 0.25 | 0.794 | 0.8259 | 0.7477 | 0.7984 | 0.6861 | 0.7381 | 0.8129 | 0.8476 | | 0.3908 | 25.0 | 12500 | 0.3959 | 0.7286 | 0.945 | 0.88 | 0.2226 | 0.7222 | 0.7564 | 0.3009 | 0.7671 | 0.7702 | 0.2871 | 0.7683 | 0.7928 | 0.715 | 0.7611 | 0.6812 | 0.7278 | 0.7897 | 0.8215 | | 0.354 | 26.0 | 13000 | 0.3740 | 0.7367 | 0.9424 | 0.8928 | 0.1496 | 0.7402 | 0.7566 | 0.2936 | 0.7748 | 0.7812 | 0.2343 | 0.7844 | 0.8003 | 0.7347 | 0.7853 | 0.6554 | 0.7052 | 0.8201 | 0.853 | | 0.367 | 27.0 
| 13500 | 0.3563 | 0.7463 | 0.9503 | 0.8908 | 0.2277 | 0.7472 | 0.7683 | 0.3011 | 0.7905 | 0.7934 | 0.3448 | 0.7939 | 0.8067 | 0.7435 | 0.7885 | 0.6722 | 0.7371 | 0.8234 | 0.8545 | | 0.356 | 28.0 | 14000 | 0.3583 | 0.7381 | 0.9303 | 0.8776 | 0.1918 | 0.734 | 0.7519 | 0.3026 | 0.7798 | 0.7827 | 0.2867 | 0.7835 | 0.7857 | 0.7438 | 0.7929 | 0.6467 | 0.6979 | 0.8237 | 0.8573 | | 0.3703 | 29.0 | 14500 | 0.3504 | 0.7417 | 0.9441 | 0.8654 | 0.2577 | 0.732 | 0.762 | 0.2996 | 0.7817 | 0.7854 | 0.3557 | 0.784 | 0.7927 | 0.7432 | 0.7913 | 0.6648 | 0.7103 | 0.817 | 0.8545 | | 0.3581 | 30.0 | 15000 | 0.3627 | 0.7342 | 0.9338 | 0.8642 | 0.2271 | 0.7376 | 0.7361 | 0.2953 | 0.7752 | 0.7777 | 0.3395 | 0.7829 | 0.7677 | 0.749 | 0.7956 | 0.6315 | 0.6845 | 0.8221 | 0.853 | | 0.3178 | 31.0 | 15500 | 0.3675 | 0.7332 | 0.958 | 0.8706 | 0.2348 | 0.7259 | 0.7627 | 0.2968 | 0.7752 | 0.7834 | 0.3681 | 0.7785 | 0.7978 | 0.7192 | 0.7694 | 0.6672 | 0.7309 | 0.8133 | 0.8497 | | 0.3386 | 32.0 | 16000 | 0.3378 | 0.7529 | 0.9364 | 0.8612 | 0.2411 | 0.7477 | 0.7756 | 0.3052 | 0.7943 | 0.7976 | 0.3443 | 0.796 | 0.8119 | 0.7652 | 0.8143 | 0.6554 | 0.7082 | 0.838 | 0.8703 | | 0.3606 | 33.0 | 16500 | 0.3678 | 0.7377 | 0.945 | 0.8757 | 0.1816 | 0.7305 | 0.7766 | 0.3017 | 0.7828 | 0.7874 | 0.3043 | 0.7874 | 0.8109 | 0.7223 | 0.7742 | 0.6677 | 0.7289 | 0.823 | 0.8591 | | 0.3542 | 34.0 | 17000 | 0.3237 | 0.7678 | 0.9577 | 0.8851 | 0.2398 | 0.7603 | 0.823 | 0.3116 | 0.8118 | 0.8154 | 0.331 | 0.8133 | 0.8538 | 0.7713 | 0.8163 | 0.694 | 0.7577 | 0.8382 | 0.8721 | | 0.3498 | 35.0 | 17500 | 0.3462 | 0.7607 | 0.9564 | 0.8941 | 0.225 | 0.7543 | 0.8052 | 0.3069 | 0.8017 | 0.8056 | 0.3152 | 0.8047 | 0.8351 | 0.7462 | 0.7948 | 0.6965 | 0.7505 | 0.8393 | 0.8715 | | 0.3432 | 36.0 | 18000 | 0.3481 | 0.7446 | 0.9511 | 0.8872 | 0.2363 | 0.736 | 0.7854 | 0.3045 | 0.7874 | 0.7935 | 0.3386 | 0.7907 | 0.8207 | 0.7246 | 0.7702 | 0.681 | 0.7485 | 0.8281 | 0.8618 | | 0.3356 | 37.0 | 18500 | 0.3494 | 0.7521 | 0.9476 | 0.8878 | 0.2416 | 0.7427 | 0.7653 | 0.303 | 0.7918 | 0.7968 | 0.3252 | 0.7957 | 0.7955 | 0.7637 | 0.8175 | 0.6657 | 0.7134 | 0.8267 | 0.8594 | | 0.3211 | 38.0 | 19000 | 0.3288 | 0.7571 | 0.9543 | 0.9015 | 0.2839 | 0.7483 | 0.781 | 0.3027 | 0.7995 | 0.8041 | 0.4048 | 0.8004 | 0.8109 | 0.7496 | 0.8 | 0.6881 | 0.7423 | 0.8337 | 0.87 | | 0.3051 | 39.0 | 19500 | 0.3266 | 0.7633 | 0.9573 | 0.8897 | 0.2715 | 0.7586 | 0.7705 | 0.301 | 0.8041 | 0.8081 | 0.3438 | 0.8078 | 0.8026 | 0.7629 | 0.8091 | 0.6927 | 0.7454 | 0.8342 | 0.8697 | | 0.3382 | 40.0 | 20000 | 0.3310 | 0.7733 | 0.956 | 0.9068 | 0.2091 | 0.7786 | 0.8033 | 0.3103 | 0.8118 | 0.8145 | 0.291 | 0.82 | 0.836 | 0.7502 | 0.8012 | 0.7314 | 0.7722 | 0.8382 | 0.87 | | 0.3142 | 41.0 | 20500 | 0.3367 | 0.7619 | 0.9449 | 0.8784 | 0.2308 | 0.7659 | 0.7691 | 0.3025 | 0.8005 | 0.8044 | 0.3357 | 0.8087 | 0.8008 | 0.7696 | 0.8139 | 0.6846 | 0.733 | 0.8316 | 0.8664 | | 0.3508 | 42.0 | 21000 | 0.3400 | 0.7435 | 0.9537 | 0.8756 | 0.2596 | 0.746 | 0.758 | 0.3028 | 0.7895 | 0.7959 | 0.3938 | 0.7955 | 0.7991 | 0.7244 | 0.7857 | 0.6718 | 0.7299 | 0.8343 | 0.8721 | | 0.3418 | 43.0 | 21500 | 0.3165 | 0.7693 | 0.9654 | 0.8953 | 0.2851 | 0.7617 | 0.8064 | 0.3075 | 0.8188 | 0.8224 | 0.4124 | 0.8151 | 0.8479 | 0.7574 | 0.8226 | 0.7105 | 0.767 | 0.8401 | 0.8776 | | 0.3113 | 44.0 | 22000 | 0.3370 | 0.7503 | 0.9598 | 0.8954 | 0.2546 | 0.7475 | 0.7714 | 0.3057 | 0.7979 | 0.8007 | 0.311 | 0.7995 | 0.8096 | 0.7492 | 0.8032 | 0.6775 | 0.7351 | 0.8242 | 0.8639 | | 0.3321 | 45.0 | 22500 | 0.3149 | 0.7702 | 0.9513 | 0.8936 | 
0.2749 | 0.7769 | 0.7853 | 0.3116 | 0.8095 | 0.8155 | 0.3581 | 0.8213 | 0.8192 | 0.7551 | 0.8103 | 0.7094 | 0.7567 | 0.8461 | 0.8794 | | 0.2898 | 46.0 | 23000 | 0.3268 | 0.7598 | 0.9362 | 0.8826 | 0.2491 | 0.767 | 0.7647 | 0.3047 | 0.8014 | 0.8043 | 0.3048 | 0.8129 | 0.7967 | 0.7735 | 0.8246 | 0.6665 | 0.7134 | 0.8394 | 0.8748 | | 0.3005 | 47.0 | 23500 | 0.3154 | 0.7694 | 0.9539 | 0.9 | 0.3107 | 0.7722 | 0.7935 | 0.3095 | 0.8083 | 0.8121 | 0.3771 | 0.8155 | 0.8286 | 0.7664 | 0.8179 | 0.6945 | 0.7412 | 0.8472 | 0.8773 | | 0.2937 | 48.0 | 24000 | 0.3359 | 0.7649 | 0.9433 | 0.8798 | 0.2748 | 0.7609 | 0.7692 | 0.3077 | 0.8046 | 0.8085 | 0.3462 | 0.8091 | 0.8109 | 0.7668 | 0.8167 | 0.6882 | 0.734 | 0.8397 | 0.8748 | | 0.2994 | 49.0 | 24500 | 0.3217 | 0.7658 | 0.9399 | 0.9006 | 0.2809 | 0.758 | 0.7851 | 0.3086 | 0.8038 | 0.8083 | 0.3738 | 0.8032 | 0.8207 | 0.7636 | 0.8075 | 0.6925 | 0.7454 | 0.8414 | 0.8721 | | 0.2677 | 50.0 | 25000 | 0.3322 | 0.7653 | 0.9479 | 0.8893 | 0.2232 | 0.7697 | 0.768 | 0.3096 | 0.8031 | 0.8077 | 0.3152 | 0.8123 | 0.8061 | 0.7537 | 0.8004 | 0.6944 | 0.7454 | 0.8477 | 0.8773 | | 0.2658 | 51.0 | 25500 | 0.3119 | 0.7865 | 0.9556 | 0.9088 | 0.3102 | 0.7849 | 0.8165 | 0.3118 | 0.8204 | 0.8252 | 0.4181 | 0.8242 | 0.8406 | 0.7873 | 0.8282 | 0.7245 | 0.7722 | 0.8476 | 0.8752 | | 0.3013 | 52.0 | 26000 | 0.3267 | 0.7682 | 0.9527 | 0.9042 | 0.299 | 0.7747 | 0.7725 | 0.3082 | 0.8062 | 0.811 | 0.3948 | 0.8158 | 0.8119 | 0.7713 | 0.8183 | 0.6935 | 0.7433 | 0.8398 | 0.8715 | | 0.2996 | 53.0 | 26500 | 0.3058 | 0.7848 | 0.955 | 0.9014 | 0.2989 | 0.7855 | 0.8104 | 0.3172 | 0.8235 | 0.8269 | 0.371 | 0.8286 | 0.8383 | 0.7765 | 0.8242 | 0.7282 | 0.7753 | 0.8498 | 0.8812 | | 0.2698 | 54.0 | 27000 | 0.3091 | 0.7792 | 0.958 | 0.9146 | 0.2821 | 0.7764 | 0.7964 | 0.3124 | 0.8157 | 0.8212 | 0.3819 | 0.8203 | 0.826 | 0.7657 | 0.8163 | 0.7275 | 0.7701 | 0.8442 | 0.8773 | | 0.279 | 55.0 | 27500 | 0.3156 | 0.7872 | 0.9579 | 0.9076 | 0.303 | 0.7809 | 0.8213 | 0.3155 | 0.826 | 0.8296 | 0.421 | 0.8274 | 0.846 | 0.7743 | 0.8266 | 0.7342 | 0.7784 | 0.8532 | 0.8839 | | 0.3081 | 56.0 | 28000 | 0.3420 | 0.7643 | 0.9598 | 0.9073 | 0.2604 | 0.7595 | 0.7944 | 0.3077 | 0.8032 | 0.8084 | 0.3362 | 0.8074 | 0.8304 | 0.7396 | 0.7881 | 0.7272 | 0.7753 | 0.8262 | 0.8618 | | 0.2411 | 57.0 | 28500 | 0.3053 | 0.7867 | 0.9563 | 0.9061 | 0.3135 | 0.7858 | 0.8014 | 0.3175 | 0.8211 | 0.8253 | 0.389 | 0.8296 | 0.8334 | 0.785 | 0.8298 | 0.716 | 0.7588 | 0.8592 | 0.8873 | | 0.2855 | 58.0 | 29000 | 0.3166 | 0.7775 | 0.9534 | 0.9049 | 0.3075 | 0.7825 | 0.7655 | 0.3146 | 0.8165 | 0.821 | 0.4033 | 0.8269 | 0.8077 | 0.7687 | 0.8242 | 0.7158 | 0.7598 | 0.8479 | 0.8791 | | 0.267 | 59.0 | 29500 | 0.3122 | 0.7824 | 0.9393 | 0.8946 | 0.2954 | 0.7884 | 0.7996 | 0.3128 | 0.8206 | 0.8242 | 0.3662 | 0.8287 | 0.8325 | 0.7812 | 0.8325 | 0.7079 | 0.7495 | 0.8583 | 0.8906 | | 0.2794 | 60.0 | 30000 | 0.3151 | 0.7828 | 0.9538 | 0.9016 | 0.2872 | 0.7855 | 0.778 | 0.3131 | 0.8211 | 0.8243 | 0.37 | 0.8286 | 0.813 | 0.7778 | 0.8242 | 0.7169 | 0.766 | 0.8538 | 0.8827 | | 0.2753 | 61.0 | 30500 | 0.3159 | 0.7771 | 0.9501 | 0.9101 | 0.2857 | 0.7781 | 0.7739 | 0.3145 | 0.8203 | 0.824 | 0.4062 | 0.8265 | 0.8113 | 0.7801 | 0.8345 | 0.7112 | 0.7629 | 0.8401 | 0.8745 | | 0.2723 | 62.0 | 31000 | 0.3247 | 0.7794 | 0.9358 | 0.8932 | 0.2729 | 0.7824 | 0.7706 | 0.31 | 0.8159 | 0.8191 | 0.361 | 0.8214 | 0.802 | 0.7704 | 0.8222 | 0.7175 | 0.7536 | 0.8503 | 0.8815 | | 0.2692 | 63.0 | 31500 | 0.3120 | 0.78 | 0.9539 | 0.9024 | 0.3299 | 0.7832 | 0.7737 | 0.3081 | 0.8156 
| 0.8214 | 0.4438 | 0.8239 | 0.8035 | 0.7707 | 0.8222 | 0.7095 | 0.7495 | 0.8597 | 0.8924 | | 0.2669 | 64.0 | 32000 | 0.3148 | 0.7769 | 0.9381 | 0.8971 | 0.2555 | 0.7829 | 0.7754 | 0.3104 | 0.8145 | 0.8184 | 0.3438 | 0.825 | 0.8081 | 0.7828 | 0.8313 | 0.6954 | 0.7371 | 0.8527 | 0.8867 | | 0.2879 | 65.0 | 32500 | 0.3274 | 0.7675 | 0.9433 | 0.8828 | 0.3322 | 0.7679 | 0.7775 | 0.3077 | 0.811 | 0.8149 | 0.4195 | 0.8156 | 0.8094 | 0.7644 | 0.8147 | 0.6933 | 0.7464 | 0.8448 | 0.8836 | | 0.2846 | 66.0 | 33000 | 0.3176 | 0.7834 | 0.9533 | 0.8909 | 0.2865 | 0.7898 | 0.8015 | 0.3158 | 0.829 | 0.8341 | 0.3976 | 0.8383 | 0.8359 | 0.7678 | 0.823 | 0.7381 | 0.8 | 0.8442 | 0.8794 | | 0.2543 | 67.0 | 33500 | 0.3261 | 0.7757 | 0.9517 | 0.8887 | 0.2777 | 0.7769 | 0.7902 | 0.3088 | 0.8168 | 0.8227 | 0.4229 | 0.8219 | 0.8252 | 0.7759 | 0.8274 | 0.7054 | 0.7598 | 0.8457 | 0.8809 | | 0.2968 | 68.0 | 34000 | 0.3083 | 0.782 | 0.9658 | 0.9035 | 0.3121 | 0.7814 | 0.8046 | 0.3086 | 0.8228 | 0.828 | 0.4286 | 0.8297 | 0.8425 | 0.7727 | 0.8159 | 0.7152 | 0.7773 | 0.8581 | 0.8909 | | 0.2598 | 69.0 | 34500 | 0.3058 | 0.7852 | 0.953 | 0.9034 | 0.3264 | 0.7911 | 0.7915 | 0.3148 | 0.8295 | 0.8336 | 0.4014 | 0.8387 | 0.8317 | 0.7819 | 0.8369 | 0.7188 | 0.7742 | 0.855 | 0.8897 | | 0.2432 | 70.0 | 35000 | 0.3179 | 0.7686 | 0.9519 | 0.8945 | 0.3086 | 0.7737 | 0.7688 | 0.309 | 0.8117 | 0.816 | 0.4214 | 0.8218 | 0.8084 | 0.765 | 0.8194 | 0.688 | 0.7412 | 0.8528 | 0.8873 | | 0.2625 | 71.0 | 35500 | 0.3310 | 0.7748 | 0.952 | 0.8946 | 0.2824 | 0.7826 | 0.771 | 0.3096 | 0.814 | 0.8185 | 0.369 | 0.8234 | 0.8166 | 0.7704 | 0.8202 | 0.7187 | 0.767 | 0.8352 | 0.8682 | | 0.2869 | 72.0 | 36000 | 0.3307 | 0.7664 | 0.9427 | 0.8855 | 0.331 | 0.7779 | 0.7498 | 0.3018 | 0.8058 | 0.8102 | 0.4124 | 0.821 | 0.7887 | 0.7688 | 0.8246 | 0.6784 | 0.7196 | 0.852 | 0.8864 | | 0.2644 | 73.0 | 36500 | 0.3320 | 0.7719 | 0.9517 | 0.8835 | 0.3091 | 0.7788 | 0.7436 | 0.3064 | 0.8095 | 0.8125 | 0.4224 | 0.8211 | 0.7816 | 0.7676 | 0.8155 | 0.6998 | 0.7381 | 0.8485 | 0.8839 | | 0.2598 | 74.0 | 37000 | 0.3211 | 0.7792 | 0.9438 | 0.8931 | 0.2695 | 0.7865 | 0.7613 | 0.3083 | 0.8175 | 0.822 | 0.3433 | 0.8334 | 0.8011 | 0.7885 | 0.8337 | 0.6992 | 0.7433 | 0.8501 | 0.8891 | | 0.2982 | 75.0 | 37500 | 0.3129 | 0.7714 | 0.9429 | 0.891 | 0.2886 | 0.7808 | 0.7612 | 0.3065 | 0.8116 | 0.8157 | 0.3743 | 0.8253 | 0.7992 | 0.7703 | 0.8214 | 0.6943 | 0.7402 | 0.8496 | 0.8855 | | 0.2442 | 76.0 | 38000 | 0.3125 | 0.7798 | 0.9428 | 0.8784 | 0.2624 | 0.7839 | 0.7722 | 0.3123 | 0.8172 | 0.8222 | 0.3724 | 0.8285 | 0.8086 | 0.7815 | 0.8258 | 0.7029 | 0.7515 | 0.8551 | 0.8894 | | 0.2609 | 77.0 | 38500 | 0.3084 | 0.7785 | 0.9488 | 0.8845 | 0.2835 | 0.7906 | 0.7825 | 0.3125 | 0.8198 | 0.8239 | 0.3833 | 0.8346 | 0.826 | 0.7794 | 0.823 | 0.706 | 0.7608 | 0.85 | 0.8879 | | 0.276 | 78.0 | 39000 | 0.3242 | 0.7851 | 0.9398 | 0.8971 | 0.2651 | 0.7886 | 0.7793 | 0.3116 | 0.8239 | 0.8279 | 0.3433 | 0.8364 | 0.8149 | 0.783 | 0.8298 | 0.7233 | 0.768 | 0.8489 | 0.8858 | | 0.2669 | 79.0 | 39500 | 0.3163 | 0.7804 | 0.9405 | 0.8776 | 0.2287 | 0.7854 | 0.7748 | 0.3102 | 0.8229 | 0.8266 | 0.3114 | 0.8344 | 0.8049 | 0.7806 | 0.8313 | 0.7119 | 0.7619 | 0.8487 | 0.8867 | | 0.2178 | 80.0 | 40000 | 0.3195 | 0.7717 | 0.9477 | 0.8871 | 0.3213 | 0.7833 | 0.7551 | 0.3063 | 0.8144 | 0.8179 | 0.409 | 0.8295 | 0.791 | 0.77 | 0.821 | 0.6939 | 0.7423 | 0.8512 | 0.8903 | | 0.2674 | 81.0 | 40500 | 0.3227 | 0.7798 | 0.9389 | 0.8906 | 0.2706 | 0.7869 | 0.7781 | 0.3123 | 0.8198 | 0.8235 | 0.3557 | 0.8306 | 0.812 | 0.7846 | 
0.8341 | 0.7025 | 0.7515 | 0.8523 | 0.8848 | | 0.267 | 82.0 | 41000 | 0.3452 | 0.7631 | 0.9325 | 0.8928 | 0.2776 | 0.7729 | 0.7402 | 0.3044 | 0.8054 | 0.8094 | 0.361 | 0.8197 | 0.7818 | 0.7606 | 0.8163 | 0.6837 | 0.7299 | 0.845 | 0.8821 | | 0.2288 | 83.0 | 41500 | 0.3283 | 0.7862 | 0.9395 | 0.897 | 0.29 | 0.789 | 0.7729 | 0.3154 | 0.825 | 0.8282 | 0.3833 | 0.833 | 0.8075 | 0.781 | 0.8357 | 0.7269 | 0.766 | 0.8506 | 0.883 | | 0.2467 | 84.0 | 42000 | 0.3113 | 0.7818 | 0.938 | 0.884 | 0.3061 | 0.7907 | 0.7668 | 0.3087 | 0.8224 | 0.8253 | 0.4119 | 0.8334 | 0.7987 | 0.7704 | 0.8218 | 0.716 | 0.7588 | 0.8588 | 0.8955 | | 0.2282 | 85.0 | 42500 | 0.3234 | 0.7902 | 0.9402 | 0.898 | 0.2703 | 0.7938 | 0.7856 | 0.3133 | 0.8274 | 0.8308 | 0.3562 | 0.835 | 0.8148 | 0.7968 | 0.8421 | 0.7168 | 0.7619 | 0.8568 | 0.8885 | | 0.2556 | 86.0 | 43000 | 0.3280 | 0.788 | 0.9457 | 0.8903 | 0.3085 | 0.7956 | 0.7863 | 0.3152 | 0.8288 | 0.8317 | 0.4019 | 0.837 | 0.8213 | 0.7853 | 0.8313 | 0.7236 | 0.7742 | 0.8552 | 0.8894 | | 0.2462 | 87.0 | 43500 | 0.3256 | 0.7821 | 0.9447 | 0.8838 | 0.28 | 0.7892 | 0.7757 | 0.3109 | 0.8211 | 0.825 | 0.3581 | 0.8308 | 0.8074 | 0.773 | 0.8258 | 0.7146 | 0.7577 | 0.8586 | 0.8915 | | 0.2446 | 88.0 | 44000 | 0.3304 | 0.7844 | 0.9456 | 0.8984 | 0.2819 | 0.7876 | 0.7985 | 0.3152 | 0.8237 | 0.8281 | 0.3757 | 0.8307 | 0.8297 | 0.7825 | 0.8321 | 0.722 | 0.768 | 0.8485 | 0.8842 | | 0.2167 | 89.0 | 44500 | 0.3307 | 0.7836 | 0.9465 | 0.8931 | 0.2796 | 0.7927 | 0.7857 | 0.3172 | 0.8224 | 0.8267 | 0.3633 | 0.8337 | 0.8227 | 0.7641 | 0.8179 | 0.7295 | 0.7742 | 0.8572 | 0.8879 | | 0.2208 | 90.0 | 45000 | 0.3141 | 0.7897 | 0.9357 | 0.8778 | 0.2966 | 0.7964 | 0.7858 | 0.3146 | 0.8255 | 0.8305 | 0.4095 | 0.8361 | 0.8162 | 0.7981 | 0.8492 | 0.7123 | 0.7515 | 0.8586 | 0.8906 | | 0.2179 | 91.0 | 45500 | 0.3065 | 0.7973 | 0.9521 | 0.8981 | 0.3185 | 0.8027 | 0.7978 | 0.3189 | 0.8364 | 0.8395 | 0.4481 | 0.8432 | 0.8337 | 0.7911 | 0.8397 | 0.7413 | 0.7866 | 0.8597 | 0.8921 | | 0.223 | 92.0 | 46000 | 0.3365 | 0.7768 | 0.9443 | 0.8936 | 0.3138 | 0.782 | 0.7559 | 0.3082 | 0.8158 | 0.8189 | 0.419 | 0.8218 | 0.7933 | 0.7745 | 0.8254 | 0.6993 | 0.7433 | 0.8566 | 0.8879 | | 0.2352 | 93.0 | 46500 | 0.3110 | 0.7881 | 0.9509 | 0.8961 | 0.302 | 0.797 | 0.7847 | 0.3144 | 0.8266 | 0.8294 | 0.4052 | 0.8362 | 0.8163 | 0.7811 | 0.8298 | 0.7202 | 0.7639 | 0.863 | 0.8945 | | 0.2379 | 94.0 | 47000 | 0.3129 | 0.7849 | 0.942 | 0.8943 | 0.2918 | 0.7914 | 0.7724 | 0.3131 | 0.8218 | 0.8257 | 0.3952 | 0.8352 | 0.8035 | 0.7789 | 0.8266 | 0.713 | 0.7557 | 0.8627 | 0.8948 | | 0.2309 | 95.0 | 47500 | 0.3100 | 0.794 | 0.9435 | 0.907 | 0.2909 | 0.8005 | 0.7779 | 0.3159 | 0.8312 | 0.8368 | 0.4524 | 0.8441 | 0.8063 | 0.7918 | 0.8437 | 0.7237 | 0.7691 | 0.8664 | 0.8976 | | 0.2421 | 96.0 | 48000 | 0.3244 | 0.789 | 0.9436 | 0.8999 | 0.307 | 0.7934 | 0.781 | 0.315 | 0.8261 | 0.8304 | 0.4257 | 0.8316 | 0.8173 | 0.7863 | 0.8365 | 0.7156 | 0.7608 | 0.865 | 0.8939 | | 0.2159 | 97.0 | 48500 | 0.3186 | 0.796 | 0.9441 | 0.9013 | 0.2801 | 0.7992 | 0.8041 | 0.3212 | 0.8339 | 0.8378 | 0.3686 | 0.8442 | 0.8324 | 0.7891 | 0.8357 | 0.7372 | 0.7825 | 0.8619 | 0.8952 | | 0.2395 | 98.0 | 49000 | 0.3188 | 0.7928 | 0.9407 | 0.8933 | 0.315 | 0.801 | 0.7774 | 0.3147 | 0.83 | 0.8329 | 0.4005 | 0.8418 | 0.8055 | 0.7878 | 0.8337 | 0.727 | 0.7691 | 0.8635 | 0.8958 | | 0.2334 | 99.0 | 49500 | 0.2972 | 0.8062 | 0.9481 | 0.9016 | 0.3125 | 0.8137 | 0.812 | 0.3235 | 0.8437 | 0.8469 | 0.3895 | 0.8561 | 0.8396 | 0.8003 | 0.8468 | 0.7492 | 0.7918 | 0.8692 | 0.9021 | | 0.2293 | 
100.0 | 50000 | 0.3288 | 0.7856 | 0.9393 | 0.8979 | 0.2595 | 0.7885 | 0.7879 | 0.3122 | 0.8208 | 0.8248 | 0.3505 | 0.8297 | 0.8159 | 0.7777 | 0.8262 | 0.7237 | 0.7608 | 0.8554 | 0.8873 | | 0.2218 | 101.0 | 50500 | 0.3177 | 0.7966 | 0.9464 | 0.8976 | 0.2801 | 0.8019 | 0.8037 | 0.3176 | 0.8319 | 0.8356 | 0.3643 | 0.8386 | 0.8383 | 0.7882 | 0.8333 | 0.7359 | 0.7773 | 0.8657 | 0.8961 | | 0.207 | 102.0 | 51000 | 0.3204 | 0.7944 | 0.9457 | 0.8966 | 0.2779 | 0.8032 | 0.7741 | 0.3145 | 0.8296 | 0.8333 | 0.361 | 0.8433 | 0.8043 | 0.793 | 0.8417 | 0.726 | 0.7639 | 0.8642 | 0.8942 | | 0.2236 | 103.0 | 51500 | 0.3233 | 0.7909 | 0.9434 | 0.8957 | 0.2628 | 0.8028 | 0.7914 | 0.3152 | 0.8281 | 0.8326 | 0.379 | 0.8406 | 0.8274 | 0.7808 | 0.8306 | 0.7275 | 0.7691 | 0.8644 | 0.8982 | | 0.2209 | 104.0 | 52000 | 0.3113 | 0.8037 | 0.9459 | 0.915 | 0.302 | 0.8082 | 0.8123 | 0.3201 | 0.8385 | 0.8435 | 0.4129 | 0.8493 | 0.8403 | 0.7971 | 0.844 | 0.7465 | 0.7876 | 0.8674 | 0.8988 | | 0.2005 | 105.0 | 52500 | 0.3211 | 0.8 | 0.9458 | 0.9061 | 0.3026 | 0.8039 | 0.8066 | 0.3217 | 0.8355 | 0.8388 | 0.3881 | 0.8449 | 0.8356 | 0.7872 | 0.8369 | 0.7489 | 0.7866 | 0.8639 | 0.893 | | 0.2611 | 106.0 | 53000 | 0.3086 | 0.7984 | 0.9393 | 0.9063 | 0.3172 | 0.8085 | 0.7794 | 0.3171 | 0.8347 | 0.8377 | 0.3995 | 0.851 | 0.8088 | 0.7972 | 0.8472 | 0.7298 | 0.766 | 0.8682 | 0.9 | | 0.2117 | 107.0 | 53500 | 0.3087 | 0.7914 | 0.9424 | 0.8985 | 0.3112 | 0.7978 | 0.7693 | 0.315 | 0.8293 | 0.833 | 0.4114 | 0.8418 | 0.8029 | 0.7812 | 0.8333 | 0.7295 | 0.768 | 0.8635 | 0.8976 | | 0.2093 | 108.0 | 54000 | 0.3056 | 0.7981 | 0.9479 | 0.9065 | 0.2851 | 0.807 | 0.8011 | 0.3201 | 0.8326 | 0.8376 | 0.3867 | 0.8447 | 0.8322 | 0.7836 | 0.8317 | 0.7378 | 0.7794 | 0.873 | 0.9018 | | 0.2155 | 109.0 | 54500 | 0.3124 | 0.8016 | 0.9461 | 0.9017 | 0.2928 | 0.8107 | 0.7842 | 0.3212 | 0.8377 | 0.8413 | 0.3738 | 0.8527 | 0.8171 | 0.7936 | 0.8456 | 0.7448 | 0.7835 | 0.8664 | 0.8948 | | 0.2033 | 110.0 | 55000 | 0.3006 | 0.8014 | 0.9508 | 0.9072 | 0.2781 | 0.8032 | 0.8093 | 0.3222 | 0.8395 | 0.8436 | 0.3671 | 0.8502 | 0.8373 | 0.7922 | 0.8448 | 0.7448 | 0.7887 | 0.8671 | 0.8973 | | 0.2083 | 111.0 | 55500 | 0.3220 | 0.799 | 0.9442 | 0.8934 | 0.2552 | 0.8021 | 0.7974 | 0.3215 | 0.8361 | 0.8386 | 0.3371 | 0.8459 | 0.8293 | 0.7975 | 0.8437 | 0.7367 | 0.7784 | 0.8629 | 0.8939 | | 0.2128 | 112.0 | 56000 | 0.3111 | 0.7999 | 0.9453 | 0.8985 | 0.292 | 0.8055 | 0.807 | 0.3216 | 0.8357 | 0.839 | 0.369 | 0.8456 | 0.8404 | 0.8004 | 0.8472 | 0.7292 | 0.7711 | 0.8701 | 0.8988 | | 0.2104 | 113.0 | 56500 | 0.3162 | 0.7986 | 0.9476 | 0.9022 | 0.2889 | 0.8053 | 0.8013 | 0.3201 | 0.8347 | 0.8382 | 0.3781 | 0.8441 | 0.8324 | 0.7916 | 0.8397 | 0.7354 | 0.7763 | 0.8689 | 0.8985 | | 0.2143 | 114.0 | 57000 | 0.3173 | 0.7983 | 0.9394 | 0.9003 | 0.2735 | 0.8079 | 0.7805 | 0.3189 | 0.8345 | 0.8368 | 0.3362 | 0.8475 | 0.814 | 0.7948 | 0.8413 | 0.7299 | 0.7701 | 0.8703 | 0.8991 | | 0.2068 | 115.0 | 57500 | 0.3119 | 0.7984 | 0.9423 | 0.9043 | 0.2954 | 0.81 | 0.7703 | 0.3175 | 0.8341 | 0.8366 | 0.3605 | 0.8496 | 0.8062 | 0.7905 | 0.8409 | 0.7332 | 0.7691 | 0.8714 | 0.9 | | 0.229 | 116.0 | 58000 | 0.3149 | 0.7965 | 0.9423 | 0.8993 | 0.2689 | 0.8075 | 0.7753 | 0.3185 | 0.8322 | 0.836 | 0.37 | 0.8469 | 0.8078 | 0.7908 | 0.8401 | 0.7306 | 0.7701 | 0.8682 | 0.8979 | | 0.2138 | 117.0 | 58500 | 0.3153 | 0.7996 | 0.9409 | 0.9016 | 0.2821 | 0.8053 | 0.7852 | 0.3196 | 0.8357 | 0.8388 | 0.3762 | 0.848 | 0.8162 | 0.7975 | 0.8468 | 0.7319 | 0.7701 | 0.8692 | 0.8994 | | 0.2498 | 118.0 | 59000 | 0.3178 | 
0.7947 | 0.9367 | 0.8946 | 0.3007 | 0.8062 | 0.7716 | 0.3153 | 0.8313 | 0.8343 | 0.3781 | 0.8467 | 0.8048 | 0.7845 | 0.8341 | 0.7271 | 0.767 | 0.8725 | 0.9018 | | 0.1974 | 119.0 | 59500 | 0.3135 | 0.7969 | 0.942 | 0.8924 | 0.3104 | 0.8037 | 0.7801 | 0.3179 | 0.8314 | 0.8342 | 0.3767 | 0.8466 | 0.8106 | 0.7876 | 0.8349 | 0.7301 | 0.766 | 0.873 | 0.9018 | | 0.2079 | 120.0 | 60000 | 0.3068 | 0.7985 | 0.9365 | 0.8994 | 0.2926 | 0.8085 | 0.772 | 0.3193 | 0.8339 | 0.8366 | 0.351 | 0.8518 | 0.8033 | 0.7931 | 0.8405 | 0.7299 | 0.768 | 0.8724 | 0.9012 | | 0.2138 | 121.0 | 60500 | 0.3116 | 0.8011 | 0.9434 | 0.8993 | 0.3113 | 0.8077 | 0.8025 | 0.3197 | 0.8373 | 0.8408 | 0.3848 | 0.8503 | 0.8327 | 0.792 | 0.8409 | 0.7407 | 0.7814 | 0.8705 | 0.9 | | 0.2041 | 122.0 | 61000 | 0.3138 | 0.7979 | 0.9424 | 0.8994 | 0.3121 | 0.8087 | 0.7685 | 0.3184 | 0.834 | 0.8369 | 0.3962 | 0.8509 | 0.8019 | 0.7874 | 0.8345 | 0.7341 | 0.7753 | 0.8723 | 0.9009 | | 0.1967 | 123.0 | 61500 | 0.3122 | 0.8005 | 0.9388 | 0.9016 | 0.2981 | 0.8074 | 0.7723 | 0.3177 | 0.8352 | 0.8376 | 0.3719 | 0.8508 | 0.8013 | 0.7988 | 0.8452 | 0.7333 | 0.7691 | 0.8692 | 0.8985 | | 0.2053 | 124.0 | 62000 | 0.3151 | 0.7946 | 0.9362 | 0.9015 | 0.2908 | 0.8052 | 0.7641 | 0.318 | 0.8299 | 0.8325 | 0.3738 | 0.8464 | 0.7943 | 0.7887 | 0.8393 | 0.7308 | 0.7639 | 0.8642 | 0.8942 | | 0.2082 | 125.0 | 62500 | 0.3126 | 0.7999 | 0.9362 | 0.9025 | 0.2865 | 0.8106 | 0.7728 | 0.3186 | 0.8336 | 0.836 | 0.3686 | 0.8515 | 0.7985 | 0.7976 | 0.8429 | 0.7335 | 0.767 | 0.8688 | 0.8982 | | 0.2148 | 126.0 | 63000 | 0.3068 | 0.8039 | 0.9451 | 0.9013 | 0.2967 | 0.8156 | 0.7788 | 0.3212 | 0.8382 | 0.841 | 0.3752 | 0.8559 | 0.8073 | 0.8 | 0.8476 | 0.7426 | 0.7763 | 0.869 | 0.8991 | | 0.2408 | 127.0 | 63500 | 0.3024 | 0.8035 | 0.9472 | 0.9046 | 0.3043 | 0.8137 | 0.8039 | 0.3219 | 0.8391 | 0.8428 | 0.3862 | 0.8542 | 0.8365 | 0.7934 | 0.8444 | 0.7443 | 0.7825 | 0.8727 | 0.9015 | | 0.2024 | 128.0 | 64000 | 0.3083 | 0.8002 | 0.9437 | 0.9056 | 0.3118 | 0.811 | 0.7795 | 0.3183 | 0.8352 | 0.8386 | 0.389 | 0.8517 | 0.8137 | 0.7872 | 0.8401 | 0.7398 | 0.7742 | 0.8737 | 0.9015 | | 0.2157 | 129.0 | 64500 | 0.3075 | 0.8026 | 0.9416 | 0.8993 | 0.3022 | 0.8141 | 0.7809 | 0.3213 | 0.8373 | 0.8407 | 0.3781 | 0.8554 | 0.8121 | 0.7947 | 0.8444 | 0.7423 | 0.7773 | 0.8709 | 0.9003 | | 0.2064 | 130.0 | 65000 | 0.3108 | 0.8024 | 0.9405 | 0.8981 | 0.3084 | 0.8127 | 0.7864 | 0.3208 | 0.8373 | 0.8407 | 0.3976 | 0.8542 | 0.8147 | 0.7937 | 0.844 | 0.7428 | 0.7784 | 0.8708 | 0.8997 | | 0.1886 | 131.0 | 65500 | 0.3046 | 0.8058 | 0.9408 | 0.9039 | 0.2995 | 0.813 | 0.792 | 0.3228 | 0.8402 | 0.8437 | 0.3895 | 0.8542 | 0.8232 | 0.798 | 0.8476 | 0.7472 | 0.7825 | 0.8721 | 0.9009 | | 0.2001 | 132.0 | 66000 | 0.3036 | 0.8018 | 0.9402 | 0.9011 | 0.298 | 0.8102 | 0.7865 | 0.3198 | 0.8374 | 0.8404 | 0.3762 | 0.8525 | 0.8175 | 0.7945 | 0.844 | 0.742 | 0.7784 | 0.869 | 0.8988 | | 0.2174 | 133.0 | 66500 | 0.3015 | 0.8053 | 0.9437 | 0.9028 | 0.3028 | 0.8146 | 0.7837 | 0.3223 | 0.8393 | 0.8422 | 0.3795 | 0.8545 | 0.814 | 0.7967 | 0.8464 | 0.7486 | 0.7814 | 0.8705 | 0.8988 | | 0.1916 | 134.0 | 67000 | 0.3066 | 0.8044 | 0.9404 | 0.9053 | 0.3072 | 0.8123 | 0.7829 | 0.3217 | 0.8386 | 0.8417 | 0.3943 | 0.8525 | 0.8119 | 0.7965 | 0.848 | 0.7464 | 0.7784 | 0.8704 | 0.8988 | | 0.2129 | 135.0 | 67500 | 0.3089 | 0.8007 | 0.9374 | 0.9011 | 0.2991 | 0.8089 | 0.7809 | 0.3206 | 0.8344 | 0.838 | 0.3862 | 0.8496 | 0.8124 | 0.793 | 0.8433 | 0.7368 | 0.7701 | 0.8722 | 0.9006 | | 0.2161 | 136.0 | 68000 | 0.3047 | 0.806 | 0.9405 | 0.9026 
| 0.3 | 0.8156 | 0.7859 | 0.3214 | 0.8397 | 0.8431 | 0.3895 | 0.8561 | 0.8157 | 0.7985 | 0.8468 | 0.7468 | 0.7804 | 0.8727 | 0.9021 | | 0.2227 | 137.0 | 68500 | 0.3070 | 0.8041 | 0.9412 | 0.9039 | 0.309 | 0.814 | 0.7813 | 0.3206 | 0.8384 | 0.8413 | 0.3824 | 0.8539 | 0.8142 | 0.7924 | 0.8433 | 0.7456 | 0.7784 | 0.8743 | 0.9024 | | 0.219 | 138.0 | 69000 | 0.3046 | 0.8056 | 0.9405 | 0.9058 | 0.3072 | 0.816 | 0.7821 | 0.3222 | 0.8391 | 0.8421 | 0.3857 | 0.8561 | 0.8109 | 0.7956 | 0.8444 | 0.7483 | 0.7804 | 0.873 | 0.9015 | | 0.201 | 139.0 | 69500 | 0.3036 | 0.8044 | 0.9405 | 0.9022 | 0.3104 | 0.8138 | 0.7828 | 0.3222 | 0.8385 | 0.8414 | 0.3857 | 0.8537 | 0.8135 | 0.7916 | 0.8417 | 0.7483 | 0.7814 | 0.8732 | 0.9012 | | 0.2011 | 140.0 | 70000 | 0.3054 | 0.8033 | 0.9404 | 0.9008 | 0.2954 | 0.814 | 0.7807 | 0.3215 | 0.8369 | 0.8403 | 0.3762 | 0.8538 | 0.8122 | 0.791 | 0.8405 | 0.7461 | 0.7794 | 0.8729 | 0.9009 | | 0.2139 | 141.0 | 70500 | 0.3038 | 0.8057 | 0.9404 | 0.9055 | 0.3014 | 0.8142 | 0.7827 | 0.3226 | 0.8397 | 0.8427 | 0.3857 | 0.8545 | 0.8134 | 0.7963 | 0.8456 | 0.7496 | 0.7825 | 0.8712 | 0.9 | | 0.2095 | 142.0 | 71000 | 0.3056 | 0.8043 | 0.9404 | 0.9023 | 0.2945 | 0.8148 | 0.7828 | 0.3213 | 0.838 | 0.8417 | 0.3862 | 0.8545 | 0.8134 | 0.7941 | 0.8448 | 0.746 | 0.7794 | 0.8728 | 0.9009 | | 0.2028 | 143.0 | 71500 | 0.3062 | 0.805 | 0.9404 | 0.9022 | 0.2943 | 0.8141 | 0.7824 | 0.3214 | 0.8382 | 0.8419 | 0.3829 | 0.8545 | 0.8135 | 0.7948 | 0.8444 | 0.7472 | 0.7804 | 0.8732 | 0.9009 | | 0.2026 | 144.0 | 72000 | 0.3060 | 0.8046 | 0.9405 | 0.9049 | 0.2989 | 0.8135 | 0.7825 | 0.3213 | 0.8384 | 0.8418 | 0.3829 | 0.854 | 0.8135 | 0.7929 | 0.8433 | 0.7487 | 0.7814 | 0.8723 | 0.9006 | | 0.2019 | 145.0 | 72500 | 0.3062 | 0.8046 | 0.9404 | 0.9017 | 0.3043 | 0.8126 | 0.7825 | 0.3214 | 0.8382 | 0.8416 | 0.3862 | 0.8536 | 0.8135 | 0.7932 | 0.8437 | 0.7477 | 0.7804 | 0.873 | 0.9006 | | 0.1945 | 146.0 | 73000 | 0.3056 | 0.8047 | 0.9405 | 0.9024 | 0.2993 | 0.8137 | 0.7842 | 0.3221 | 0.8384 | 0.8418 | 0.3829 | 0.8542 | 0.8145 | 0.7934 | 0.844 | 0.7479 | 0.7804 | 0.8728 | 0.9009 | | 0.1983 | 147.0 | 73500 | 0.3080 | 0.8049 | 0.9405 | 0.9024 | 0.3009 | 0.8141 | 0.7843 | 0.3218 | 0.8386 | 0.842 | 0.3829 | 0.8546 | 0.8147 | 0.7937 | 0.844 | 0.7479 | 0.7804 | 0.8731 | 0.9015 | | 0.19 | 148.0 | 74000 | 0.3058 | 0.8044 | 0.9405 | 0.9024 | 0.2979 | 0.8141 | 0.7843 | 0.3221 | 0.8382 | 0.8419 | 0.3829 | 0.8546 | 0.8145 | 0.7936 | 0.844 | 0.7475 | 0.7804 | 0.8722 | 0.9012 | | 0.1978 | 149.0 | 74500 | 0.3059 | 0.8044 | 0.9405 | 0.9024 | 0.2979 | 0.8141 | 0.7843 | 0.3221 | 0.8382 | 0.8419 | 0.3829 | 0.8546 | 0.8145 | 0.7936 | 0.844 | 0.7475 | 0.7804 | 0.8722 | 0.9012 | | 0.2344 | 150.0 | 75000 | 0.3059 | 0.8044 | 0.9405 | 0.9024 | 0.2979 | 0.8141 | 0.7843 | 0.3221 | 0.8382 | 0.8419 | 0.3829 | 0.8546 | 0.8145 | 0.7936 | 0.844 | 0.7475 | 0.7804 | 0.8722 | 0.9012 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 2.19.2 - Tokenizers 0.20.3
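No inference snippet is given above; as an illustrative, untested sketch (not part of the original card), this fine-tuned DETR checkpoint can presumably be run through the 🤗 `transformers` object-detection pipeline. The repository path and image filename below are placeholders, so substitute the actual model id of this checkpoint.

```python
from transformers import pipeline

# Hypothetical repo id -- replace with the real path of this fine-tuned checkpoint.
detector = pipeline(
    "object-detection",
    model="your-username/chickens-composite-201616161616-150-epochs-wo-transform-metrics-test-shfld",
)

# Placeholder image path; the pipeline also accepts PIL images and URLs.
for det in detector("example_farm_photo.jpg", threshold=0.5):
    # The card's metrics suggest three classes: chicken, duck, and plant.
    print(det["label"], round(det["score"], 3), det["box"])
```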
yejinkim/forget1_expert_epoch1
yejinkim
2024-11-11T03:05:34Z
139
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T02:57:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TingChen-ppmc/whisper-small-shanghai-tts-vc-1.0-1.0
TingChen-ppmc
2024-11-11T03:03:02Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-08-07T02:32:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
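Since the get-started section above is left as [More Information Needed], here is one possible way to try the checkpoint, inferred only from the repository's `whisper` / `automatic-speech-recognition` tags; the audio filename is a placeholder and the snippet is untested.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="TingChen-ppmc/whisper-small-shanghai-tts-vc-1.0-1.0",
)

# Placeholder audio file; the pipeline decodes common formats via ffmpeg.
result = asr("sample_clip.wav")
print(result["text"])
```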
NAPS-ai/naps-gemma-2-27b-v0.1.0
NAPS-ai
2024-11-11T03:00:31Z
5
0
null
[ "safetensors", "gemma2", "ko", "base_model:google/gemma-2-27b", "base_model:finetune:google/gemma-2-27b", "license:apache-2.0", "region:us" ]
null
2024-11-11T01:34:54Z
--- license: apache-2.0 language: - ko base_model: - google/gemma-2-27b --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6699b80354725cd6e0ae8e19/pyIwew7F_vS5K27iz_721.png) Base Dataset: https://github.com/DopeorNope-Lee/Ko-Fine-tuning_DataGen LoRA fine-tuning has been completed using Unsloth! Contact: [email protected]
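The card does not show how to load the model; a minimal sketch with 🤗 `transformers` follows. It assumes the checkpoint loads like a standard Gemma-2 causal LM, and the dtype, device settings, and prompt are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NAPS-ai/naps-gemma-2-27b-v0.1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 27B parameters: expect to need one or more large GPUs
    device_map="auto",
)

prompt = "안녕하세요, 간단히 자기소개를 해주세요."  # "Hello, please introduce yourself briefly."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```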
Srijith-rkr/deepseek_SFT_history
Srijith-rkr
2024-11-11T02:58:54Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-11T02:46:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NON906/Hermes-2-Pro-Ninja-V3-GGUF
NON906
2024-11-11T02:51:34Z
13
0
null
[ "gguf", "ja", "base_model:NON906/Hermes-2-Pro-Ninja-V3", "base_model:quantized:NON906/Hermes-2-Pro-Ninja-V3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-11-10T11:11:28Z
--- license: apache-2.0 language: - ja base_model: - NON906/Hermes-2-Pro-Ninja-V3 --- # Hermes-2-Pro-Ninja-V3-GGUF This is the GGUF version of [NON906/Hermes-2-Pro-Ninja-V3](https://huggingface.co/NON906/Hermes-2-Pro-Ninja-V3).
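Beyond pointing at the original repository, the card gives no usage notes. One common way to run a GGUF export is through `llama-cpp-python`; the sketch below is illustrative and untested, the quant filename is a guess (use whichever `.gguf` file from this repo you actually download), and the prompt is a placeholder.

```python
from llama_cpp import Llama

# Assumes a quant file from this repository has already been downloaded;
# the filename is illustrative -- point model_path at whichever .gguf you fetched.
llm = Llama(model_path="Hermes-2-Pro-Ninja-V3.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "自己紹介をしてください。"}]  # Japanese, matching the model's `ja` tag
)
print(out["choices"][0]["message"]["content"])
```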
swkong/Adapter-Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled-lora-S100-E9
swkong
2024-11-11T02:47:18Z
6
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:swkong/Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled", "base_model:adapter:swkong/Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled", "region:us" ]
null
2024-11-11T02:40:26Z
--- library_name: peft base_model: swkong/Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled tags: - trl - sft - generated_from_trainer model-index: - name: Adapter-Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled-lora-S100-E9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Adapter-Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled-lora-S100-E9 This model is a fine-tuned version of [swkong/Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled](https://huggingface.co/swkong/Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 9 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.2 - Pytorch 2.4.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
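The card lists the training setup but not how to use the adapter. As a rough, untested sketch: a LoRA adapter like this is normally attached on top of its base checkpoint with PEFT. The prompt is a placeholder, and running the GPTQ base is assumed to require a GPU plus a GPTQ backend (e.g. auto-gptq), which the card does not state.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "swkong/Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled"
adapter_id = "swkong/Adapter-Phi-3-medium-128k-instruct-gptq-pubmed_unlabeled-lora-S100-E9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # GPTQ base; needs a GPU + GPTQ backend
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter on top

prompt = "Summarize the following PubMed abstract:\n..."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```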
mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF
mradermacher
2024-11-11T02:46:11Z
27
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:win10/Meissa-Qwen2.5-12.3B-Instruct", "base_model:quantized:win10/Meissa-Qwen2.5-12.3B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-10T21:20:47Z
--- base_model: win10/Meissa-Qwen2.5-12.3B-Instruct language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/win10/Meissa-Qwen2.5-12.3B-Instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q2_K.gguf) | Q2_K | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q3_K_L.gguf) | Q3_K_L | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.Q8_0.gguf) | Q8_0 | 13.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
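As a slightly more concrete companion to the Usage note above (an illustrative sketch, not part of the original card): the single-file quants in the table can be fetched with `huggingface_hub` and run with `llama-cpp-python`. The Q4_K_M filename below is taken from the table; the prompt and context size are placeholders.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Grab one of the single-file quants listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF",
    filename="Meissa-Qwen2.5-12.3B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("Q: What does Q4_K_M mean for a GGUF quant?\nA:", max_tokens=64)["choices"][0]["text"])
```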
ENERGY-DRINK-LOVE/rtzr_dpo-v4-hq
ENERGY-DRINK-LOVE
2024-11-11T02:45:30Z
2,189
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T06:45:34Z
--- library_name: transformers tags: [] --- **This model is still in progress** # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed]