| Column | Type | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-16 12:29:00 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (523 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-16 12:28:25 |
| card | string (length) | 11 | 1.01M |
akshayaithalexp/en_pipeline
akshayaithalexp
2023-11-21T16:56:54Z
0
0
spacy
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
token-classification
2023-11-21T16:56:09Z
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
  results:
  - task:
      name: NER
      type: token-classification
    metrics:
    - name: NER Precision
      type: precision
      value: 0.9831760361
    - name: NER Recall
      type: recall
      value: 0.9831760361
    - name: NER F Score
      type: f_score
      value: 0.9831760361
---

| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.4,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |

### Label Scheme

<details>
<summary>View label scheme (3 labels for 1 component)</summary>

| Component | Labels |
| --- | --- |
| **`ner`** | `CITY`, `PERSON`, `VERTICAL` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `ENTS_F` | 98.32 |
| `ENTS_P` | 98.32 |
| `ENTS_R` | 98.32 |
| `TOK2VEC_LOSS` | 44425.15 |
| `NER_LOSS` | 19354.73 |
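The card documents the label scheme and accuracy but no loading snippet. A minimal usage sketch, assuming the packaged pipeline has already been installed locally (spaCy models on the Hub ship as installable packages); the example sentence is invented:

```python
# Minimal sketch: run the packaged NER pipeline with spaCy.
# Assumes the `en_pipeline` package from this repo is installed.
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("Maria is moving to Austin to work in fintech.")
for ent in doc.ents:
    # Label scheme per the card: CITY, PERSON, VERTICAL
    print(ent.text, ent.label_)
```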
gnuevo/distilhubert-finetuned-gtzan-finetuned-gtzan
gnuevo
2023-11-21T16:55:09Z
2
0
transformers
[ "transformers", "safetensors", "hubert", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2023-11-09T10:57:41Z
---
license: apache-2.0
base_model: gnuevo/distilhubert-finetuned-gtzan
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-finetuned-gtzan
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: GTZAN
      type: marsyas/gtzan
      config: all
      split: train
      args: all
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.84
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilhubert-finetuned-gtzan-finetuned-gtzan

This model is a fine-tuned version of [gnuevo/distilhubert-finetuned-gtzan](https://huggingface.co/gnuevo/distilhubert-finetuned-gtzan) on the GTZAN dataset. It achieves the following results on the evaluation set:
- Loss: 0.9179
- Accuracy: 0.84

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3025 | 1.0 | 57 | 2.2928 | 0.2 |
| 2.2609 | 2.0 | 114 | 2.2683 | 0.27 |
| 1.9479 | 3.0 | 171 | 1.9099 | 0.4 |
| 1.211 | 4.0 | 228 | 1.5258 | 0.39 |
| 0.9834 | 5.0 | 285 | 1.4254 | 0.52 |
| 0.6456 | 6.0 | 342 | 1.3216 | 0.57 |
| 0.5043 | 7.0 | 399 | 1.1890 | 0.69 |
| 0.4696 | 8.0 | 456 | 1.0764 | 0.8 |
| 0.3204 | 9.0 | 513 | 0.9564 | 0.82 |
| 0.3164 | 10.0 | 570 | 0.9101 | 0.83 |
| 0.2334 | 11.0 | 627 | 0.9021 | 0.84 |
| 0.217 | 12.0 | 684 | 0.9051 | 0.84 |
| 0.1781 | 13.0 | 741 | 0.9118 | 0.84 |
| 0.1203 | 14.0 | 798 | 0.9153 | 0.85 |
| 0.0639 | 15.0 | 855 | 0.9179 | 0.84 |

### Framework versions

- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
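Since the checkpoint is a HuBERT classifier trained with `transformers`, it should load with the standard audio-classification pipeline. A minimal sketch; the audio path is a placeholder:

```python
# Sketch: genre prediction with the standard transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="gnuevo/distilhubert-finetuned-gtzan-finetuned-gtzan",
)
# GTZAN labels are the ten music genres (blues, classical, ..., rock).
print(classifier("some_clip.wav"))
```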
mojemai/NameGeneratorDecoder
mojemai
2023-11-21T16:53:48Z
0
0
null
[ "license:mit", "region:us" ]
null
2023-11-21T16:52:06Z
---
license: mit
---

# Name Generator

This is a decoder-only Transformer model that generates names.
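The card gives no loading code or architecture details, so as a purely illustrative sketch of the underlying idea (autoregressive, character-by-character generation until an end token), here is a toy sampler with a hard-coded bigram table standing in for the actual Transformer:

```python
# Toy autoregressive sampler; the bigram table is a stand-in for the
# repo's undocumented decoder weights, not its real model.
import random

NEXT = {"^": "AJM", "A": "nl$", "J": "ao", "M": "ia", "n": "an$",
        "l": "i$", "a": "n$", "o": "e$", "i": "a$", "e": "$"}

def sample_name() -> str:
    chars, ch = [], "^"          # "^" marks start-of-name
    while True:
        ch = random.choice(NEXT[ch])
        if ch == "$":            # "$" marks end-of-name
            return "".join(chars)
        chars.append(ch)

print(sample_name())
```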
CoruNethron/neu-sai-it1
CoruNethron
2023-11-21T16:45:33Z
0
0
null
[ "autotrain", "text-generation", "license:apache-2.0", "region:us" ]
text-generation
2023-11-21T14:40:11Z
---
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
license: apache-2.0
---

# Model Trained Using AutoTrain

Model trained using 1/15 of the Saiga dataset: https://github.com/IlyaGusev/rulm/blob/e4238fd9a196405b566a2d5838ab44b7a0f4dc31/self_instruct/src/data_processing/create_chat_set.py
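Assuming the repo ships standard `transformers` weights (the card only states it was trained with AutoTrain), text can be generated from the widget prompt declared in the metadata:

```python
# Sketch: generate from the card's example prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="CoruNethron/neu-sai-it1")
out = generator("I love AutoTrain because ", max_new_tokens=40)
print(out[0]["generated_text"])
```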
AswanthCManoj/azma-llama2-7b-hf-lora-adapter-2
AswanthCManoj
2023-11-21T16:41:23Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2023-11-21T16:41:06Z
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]
<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]

### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]

### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]

## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]

### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]

### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]
[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]

## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]

#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]

#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]

## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective
[More Information Needed]

### Compute Infrastructure
[More Information Needed]

#### Hardware
[More Information Needed]

#### Software
[More Information Needed]

## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
[More Information Needed]

**APA:**
[More Information Needed]

## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]

## More Information [optional]
[More Information Needed]

## Model Card Authors [optional]
[More Information Needed]

## Model Card Contact
[More Information Needed]

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.6.3.dev0
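The quantization block above maps directly onto a `BitsAndBytesConfig`. A sketch of loading the adapter on its base model; access to the gated `meta-llama/Llama-2-7b-hf` repo is assumed:

```python
# Sketch: load the base model in 4-bit (NF4, double quant, bf16 compute)
# to match the training config, then attach this LoRA adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "AswanthCManoj/azma-llama2-7b-hf-lora-adapter-2"
)
```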
asas-ai/noon_7B_8bit_qlora_mlqa
asas-ai
2023-11-21T16:36:13Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:asas-ai/noon-7B_8bit", "base_model:finetune:asas-ai/noon-7B_8bit", "region:us" ]
null
2023-11-21T16:35:24Z
---
base_model: asas-ai/noon-7B_8bit
tags:
- generated_from_trainer
model-index:
- name: noon_7B_8bit_qlora_mlqa
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# noon_7B_8bit_qlora_mlqa

This model is a fine-tuned version of [asas-ai/noon-7B_8bit](https://huggingface.co/asas-ai/noon-7B_8bit) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1950
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.4.0
- Tokenizers 0.15.0
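The hyperparameters imply an effective batch size of 4 × 4 = 16 (train_batch_size × gradient_accumulation_steps), matching the listed total_train_batch_size. A sketch of the corresponding `TrainingArguments`; the Trainer setup itself is not in the card, so the output directory is illustrative:

```python
# Sketch: TrainingArguments mirroring the card's hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="noon_7B_8bit_qlora_mlqa",  # illustrative name
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,         # 4 x 4 = 16 effective
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=1950,
    fp16=True,                             # "Native AMP"
)
```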
SaiedAlshahrani/noon_7B_8bit_qlora_mlqa
SaiedAlshahrani
2023-11-21T16:35:26Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:asas-ai/noon-7B_8bit", "base_model:finetune:asas-ai/noon-7B_8bit", "region:us" ]
null
2023-11-21T13:49:01Z
---
base_model: asas-ai/noon-7B_8bit
tags:
- generated_from_trainer
model-index:
- name: noon_7B_8bit_qlora_mlqa
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# noon_7B_8bit_qlora_mlqa

This model is a fine-tuned version of [asas-ai/noon-7B_8bit](https://huggingface.co/asas-ai/noon-7B_8bit) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1950
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.4.0
- Tokenizers 0.15.0
fossoescp/morality-analysis
fossoescp
2023-11-21T16:32:01Z
0
0
null
[ "moral", "text-classification", "en", "dataset:USC-MOLA-Lab/MFRC", "arxiv:1910.09700", "license:unknown", "region:us" ]
text-classification
2023-11-21T16:28:52Z
---
license: unknown
datasets:
- USC-MOLA-Lab/MFRC
language:
- en
pipeline_tag: text-classification
tags:
- moral
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]
<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]

### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]

### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]

## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]

### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]

### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]
[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]

## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]

#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]

#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]

## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective
[More Information Needed]

### Compute Infrastructure
[More Information Needed]

#### Hardware
[More Information Needed]

#### Software
[More Information Needed]

## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
[More Information Needed]

**APA:**
[More Information Needed]

## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]

## More Information [optional]
[More Information Needed]

## Model Card Authors [optional]
[More Information Needed]

## Model Card Contact
[More Information Needed]
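Given the declared `pipeline_tag: text-classification`, the model should load with the standard pipeline, though the template card leaves the label set undocumented. A hedged sketch:

```python
# Sketch: moral-foundation classification; the label names come from the
# (undocumented) model config, not from this example.
from transformers import pipeline

classifier = pipeline("text-classification", model="fossoescp/morality-analysis")
print(classifier("Helping strangers in need is the right thing to do."))
```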
Akbank/test
Akbank
2023-11-21T16:29:55Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-11-21T16:29:42Z
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]
<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]

### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]

### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]

## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]

### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]

### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]
[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]

## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]

#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]

#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]

## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective
[More Information Needed]

### Compute Infrastructure
[More Information Needed]

#### Hardware
[More Information Needed]

#### Software
[More Information Needed]

## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
[More Information Needed]

**APA:**
[More Information Needed]

## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]

## More Information [optional]
[More Information Needed]

## Model Card Authors [optional]
[More Information Needed]

## Model Card Contact
[More Information Needed]
alexandrechiari/alex-model
alexandrechiari
2023-11-21T16:28:45Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-21T14:42:10Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of alex
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - alexandrechiari/alex-model

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the prompt "a photo of alex" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
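A sketch of sampling from the checkpoint with `diffusers`, using the card's instance prompt; fp16 weights and a CUDA device are assumptions, not requirements:

```python
# Sketch: generate an image with the DreamBooth instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "alexandrechiari/alex-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of alex").images[0]
image.save("alex.png")
```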
eunjinii/segformer-b0-scene-parse-150
eunjinii
2023-11-21T16:20:34Z
5
0
transformers
[ "transformers", "safetensors", "segformer", "generated_from_trainer", "dataset:scene_parse_150", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
null
2023-11-21T15:34:38Z
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# segformer-b0-scene-parse-150

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset. It achieves the following results on the evaluation set:
- Loss: 3.3175
- Mean Iou: 0.0536
- Mean Accuracy: 0.1241
- Overall Accuracy: 0.3983
- Per Category Iou: [0.28886228584264156, 0.24371287587146617, 0.8069558270192259, 0.4761964588791857, 0.5448661292194517, 0.1137512299406072, 0.0, 0.0, 0.0009700472124042208, 0.0, 0.004326142142338239, 0.009256580850448365, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0036519462973908638, nan, 0.07551971444624385, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.006026594421253421, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0]
- Per Category Accuracy: [0.36078075062366005, 0.7357816119498363, 0.9711514230439687, 0.7818717815042213, 0.9586573048709611, 0.34005504955641397, 0.0, nan, 0.0009738207962538979, nan, 0.05175275851967581, 0.11026878015161957, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0074408602150537635, nan, 0.6405943369778525, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.006209764712584743, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0]

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

Per-category IoU and accuracy arrays (~150 entries each, almost entirely 0.0 or nan, as in the final evaluation above) were also logged at each step; only the scalar columns are tabulated here. The source log is cut off mid-row after epoch 36.

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|
| 4.697 | 2.0 | 20 | 4.9388 | 0.0146 | 0.0740 | 0.2066 |
| 4.0945 | 4.0 | 40 | 4.6344 | 0.0196 | 0.0893 | 0.3038 |
| 3.9633 | 6.0 | 60 | 4.2067 | 0.0266 | 0.1027 | 0.3562 |
| 3.9982 | 8.0 | 80 | 3.9414 | 0.0306 | 0.0944 | 0.3554 |
| 3.9845 | 10.0 | 100 | 3.8601 | 0.0342 | 0.0922 | 0.3594 |
| 3.9115 | 12.0 | 120 | 3.7662 | 0.0350 | 0.0957 | 0.3679 |
| 3.373 | 14.0 | 140 | 3.8215 | 0.0372 | 0.0984 | 0.3417 |
| 3.5013 | 16.0 | 160 | 3.7204 | 0.0442 | 0.1144 | 0.3742 |
| 3.0534 | 18.0 | 180 | 3.6705 | 0.0398 | 0.1047 | 0.3590 |
| 2.9505 | 20.0 | 200 | 3.6377 | 0.0438 | 0.1112 | 0.3678 |
| 3.0578 | 22.0 | 220 | 3.6537 | 0.0451 | 0.1222 | 0.3593 |
| 2.6262 | 24.0 | 240 | 3.5562 | 0.0461 | 0.1126 | 0.3619 |
| 2.1935 | 26.0 | 260 | 3.5371 | 0.0503 | 0.1257 | 0.3861 |
| 2.7211 | 28.0 | 280 | 3.5427 | 0.0516 | 0.1265 | 0.3748 |
| 2.4548 | 30.0 | 300 | 3.4745 | 0.0499 | 0.1194 | 0.3827 |
| 2.2382 | 32.0 | 320 | 3.4673 | 0.0504 | 0.1259 | 0.3895 |
| 2.2969 | 34.0 | 340 | 3.4542 | 0.0513 | 0.1205 | 0.3870 |
| 2.1499 | 36.0 | 360 | 3.3726 | 0.0517 | 0.1173 | 0.3980 |
0.0024323330599315554, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | [0.3569296001402737, 0.7441189392123971, 0.9710002209876208, 0.79713303053375, 0.9616731853666965, 0.3154481341035546, 0.0, nan, 0.0015280698664622333, nan, 0.004735865638121277, 0.05995864920744314, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.004989247311827957, nan, 0.47322680123352956, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.002449723693955449, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | | 2.1863 | 38.0 | 380 | 3.4186 | 0.0525 | 0.1273 | 0.3811 | [0.2504889580544547, 0.25483478956990835, 0.8057009814640923, 0.4697067212416424, 0.5396347711025369, 0.10252636915407472, 0.0, 0.0, 0.0006507289713834188, 0.0, 0.0069315981544529936, 0.018561161119251973, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.002767488397837101, nan, 0.05764870992119184, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.009798765466271763, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | [0.30275526225601546, 0.705617461458583, 0.9713530257857661, 0.7609508557026213, 0.9651655330935289, 0.3440242879587955, 0.0, nan, 0.0006526671294042082, nan, 0.09403378576310907, 0.18332184700206755, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.005591397849462366, nan, 0.7485281749369218, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.010083746368142198, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | | 2.2225 | 40.0 | 400 | 3.4099 | 0.0492 | 0.1205 | 0.3773 | 
[0.2537452937686083, 0.18449030169688693, 0.7786365825991018, 0.46449828049577246, 0.5412716634029048, 0.11733757797546439, 0.0, 0.0, 0.0005726519978332086, 0.0, 0.0073889115368936, 0.0026514147191394263, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.057129018866183864, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.005376943221708364, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | [0.30811116690178453, 0.728422797348031, 0.9713297639309433, 0.7279642041384277, 0.9653016665881281, 0.27953597712962636, 0.0, nan, 0.0005749686616179928, nan, 0.09203202812225368, 0.048242591316333565, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.6944210821418559, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.005497635731783741, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | | 1.833 | 42.0 | 420 | 3.3450 | 0.0560 | 0.1255 | 0.4073 | [0.319936125906051, 0.2725346971353349, 0.8138330009289997, 0.5001258388216503, 0.5466554094086977, 0.12377822173530106, 0.0, 0.0, 0.0013409526953149175, 0.0, 0.004598588323553068, 0.0075476219000838625, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.004770867072263317, nan, 0.08357855572467444, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.01140652753108348, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | [0.4075142066964748, 0.7193765476475756, 0.9713569027615698, 0.7527632649846412, 0.9661760624957458, 0.34033856658515554, 0.0, nan, 0.001346773441627731, nan, 0.06224978029489308, 0.08683666436940041, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.009376344086021506, nan, 0.6927389963554808, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.011707400444368484, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 
nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | | 2.0316 | 44.0 | 440 | 3.3733 | 0.0538 | 0.1290 | 0.3964 | [0.2859205665261104, 0.2945810042027561, 0.8125164173393047, 0.46725364439491945, 0.5427961592094362, 0.10326615375755051, 0.0, 0.0, 0.0012484201294848977, 0.0, 0.004110036325047957, 0.01322937625754527, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.037355656984203775, nan, 0.06365003760340937, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.008933789763713784, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | [0.3560337613274992, 0.7124570652608035, 0.9713762876405888, 0.7659091676261988, 0.964018870196713, 0.36009025292081603, 0.0, nan, 0.0012535352802842729, nan, 0.04916512059369202, 0.18125430737422468, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.07638709677419354, nan, 0.711802635267732, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.009143736113484874, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | | 1.7264 | 46.0 | 460 | 3.3351 | 0.0549 | 0.1237 | 0.3989 | [0.2948823282162118, 0.24265564314301968, 0.8097134839089304, 0.4768203409948526, 0.544499023495295, 0.11367812320564351, 0.0, 0.0, 0.0009235373026519451, 0.0, 0.004245009273275516, 0.006109259820256488, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.004259190340321714, nan, 0.07332351927835375, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.011091085539997228, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | [0.37015677179223555, 0.736740154964454, 0.9709730821569943, 0.7653915828716953, 0.9634481567001241, 0.32742672856788463, 0.0, nan, 0.0009272017155821687, nan, 0.05106923152035934, 0.0833907649896623, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.008860215053763441, nan, 0.6562938043173535, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 
0.011394063692816044, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | | 1.9152 | 48.0 | 480 | 3.3416 | 0.0535 | 0.1244 | 0.3956 | [0.28584432792206127, 0.2464477707388237, 0.8098088290455681, 0.4647101057424465, 0.5453601518927258, 0.11637491362574447, 0.0, 0.0, 0.000902136258660508, 0.0, 0.005328897173914753, 0.006776967657506076, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.003578076525336091, nan, 0.07035055736529093, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.013375989409525911, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | [0.3548844734555946, 0.7387970285166546, 0.9709304354231525, 0.7671168653867071, 0.9625161658524837, 0.3342429504672124, 0.0, nan, 0.0009064821241725114, nan, 0.06615564886241578, 0.09993108201240523, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0074408602150537635, nan, 0.6599383235211662, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.013815302227539452, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | | 2.3061 | 50.0 | 500 | 3.3175 | 0.0536 | 0.1241 | 0.3983 | [0.28886228584264156, 0.24371287587146617, 0.8069558270192259, 0.4761964588791857, 0.5448661292194517, 0.1137512299406072, 0.0, 0.0, 0.0009700472124042208, 0.0, 0.004326142142338239, 0.009256580850448365, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0036519462973908638, nan, 0.07551971444624385, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.006026594421253421, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] 
| [0.36078075062366005, 0.7357816119498363, 0.9711514230439687, 0.7818717815042213, 0.9586573048709611, 0.34005504955641397, 0.0, nan, 0.0009738207962538979, nan, 0.05175275851967581, 0.11026878015161957, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0074408602150537635, nan, 0.6405943369778525, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.006209764712584743, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
StevenPerrin/dqn-SpaceInvadersNoFrameskip-v4
StevenPerrin
2023-11-21T16:20:01Z
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T16:19:26Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 637.00 +/- 114.61 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga StevenPerrin -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga StevenPerrin -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga StevenPerrin ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
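In addition to the RL Zoo CLI above, the checkpoint can be loaded programmatically. This is a minimal sketch assuming the usual RL Zoo filename convention (`dqn-SpaceInvadersNoFrameskip-v4.zip`); verify the name against this repo's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the RL Zoo naming convention; check the repo files.
checkpoint = load_from_hub(
    repo_id="StevenPerrin/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
# custom_objects keeps older SB3 checkpoints loadable under newer library versions.
model = DQN.load(
    checkpoint,
    custom_objects={"learning_rate": 0.0, "lr_schedule": lambda _: 0.0},
)
```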
ThuyNT03/CS341_Camera-COQE_UniCOQE_PA
ThuyNT03
2023-11-21T16:17:23Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-20T21:23:43Z
--- license: apache-2.0 base_model: T5-base tags: - generated_from_trainer model-index: - name: CS341_Camera-COQE_UniCOQE_PA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS341_Camera-COQE_UniCOQE_PA This model is a fine-tuned version of [T5-base](https://huggingface.co/T5-base) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
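Since the usage sections above are placeholders, here is a minimal inference sketch using the standard `transformers` seq2seq API. The example input is an assumption — the exact prompt/prefix this comparative-opinion model expects should be checked against the training code:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ThuyNT03/CS341_Camera-COQE_UniCOQE_PA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input; the task prefix/format the model expects is an assumption.
text = "Camera A has a sharper lens than Camera B."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```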
cartesinus/xlm-r-base-amazon-massive-intent
cartesinus
2023-11-21T16:09:03Z
17
8
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "nlu", "intent-classification", "en", "dataset:AmazonScience/massive", "arxiv:2310.16609", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "doi:10.57967/hf/1244", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-22T07:55:27Z
--- language: - en license: mit tags: - generated_from_trainer - nlu - intent-classification datasets: - AmazonScience/massive metrics: - accuracy - f1 base_model: xlm-roberta-base model-index: - name: xlm-r-base-amazon-massive-intent results: - task: type: intent-classification name: intent-classification dataset: name: MASSIVE type: AmazonScience/massive split: test metrics: - type: f1 value: 0.8775 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-r-base-amazon-massive-intent This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Amazon Massive](https://huggingface.co/datasets/AmazonScience/massive) dataset (only the en-US subset). It achieves the following results on the evaluation set: - Loss: 0.5439 - Accuracy: 0.8775 - F1: 0.8775 ## Results | domain | train-accuracy | test-accuracy | |:------:|:--------------:|:-------------:| |alarm|0.967|0.9846| |audio|0.7458|0.659| |calendar|0.9797|0.3181| |cooking|0.9714|0.9571| |datetime|0.9777|0.9402| |email|0.9727|0.9296| |general|0.8952|0.5949| |iot|0.9329|0.9122| |list|0.9792|0.9538| |music|0.9355|0.8837| |news|0.9607|0.8764| |play|0.9419|0.874| |qa|0.9677|0.8591| |recommendation|0.9515|0.8764| |social|0.9671|0.8932| |takeaway|0.9192|0.8478| |transport|0.9425|0.9193| |weather|0.9895|0.93| ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 2.734 | 1.0 | 720 | 1.1883 | 0.7196 | 0.7196 | | 1.2774 | 2.0 | 1440 | 0.7162 | 0.8342 | 0.8342 | | 0.6301 | 3.0 | 2160 | 0.5817 | 0.8672 | 0.8672 | | 0.4901 | 4.0 | 2880 | 0.5555 | 0.8770 | 0.8770 | | 0.3398 | 5.0 | 3600 | 0.5439 | 0.8775 | 0.8775 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1 ## Citation ```bibtex @article{kubis2023back, title={Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors}, author={Kubis, Marek and Sk{\'o}rzewski, Pawe{\l} and Sowa{\'n}ski, Marcin and Zi{\k{e}}tkiewicz, Tomasz}, journal={arXiv preprint arXiv:2310.16609}, year={2023}, eprint={2310.16609}, archivePrefix={arXiv}, } ```
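A minimal inference sketch with the `transformers` pipeline API; only the repository id is taken from this card, and the printed label in the comment is illustrative:

```python
from transformers import pipeline

# Loads the fine-tuned intent classifier directly from the Hub.
classifier = pipeline(
    "text-classification",
    model="cartesinus/xlm-r-base-amazon-massive-intent",
)
print(classifier("wake me up at seven in the morning"))
# e.g. [{'label': 'alarm_set', 'score': 0.99...}] — exact label strings come from the model config
```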
personal1802/22
personal1802
2023-11-21T16:00:07Z
4
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "region:us" ]
text-to-image
2023-11-21T15:53:15Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/white.png base_model: runwayml/stable-diffusion-v1-5 instance_prompt: null --- # kakarot28DCozy_cozy.safetensors <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/personal1802/22/tree/main) them in the Files & versions tab.
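As a usage sketch — assuming the weight filename in the card title and a standard `diffusers` LoRA workflow — the adapter can be applied on top of the SD 1.5 base model like this:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# weight_name is assumed from the card title; confirm in the Files & versions tab.
pipe.load_lora_weights("personal1802/22", weight_name="kakarot28DCozy_cozy.safetensors")
image = pipe("a cozy room interior, warm lighting").images[0]
image.save("sample.png")
```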
arpieb/peft-lora-starcoderbase-7b-personal-copilot-elixir
arpieb
2023-11-21T15:57:51Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:bigcode/starcoderbase-7b", "base_model:adapter:bigcode/starcoderbase-7b", "license:bigcode-openrail-m", "region:us" ]
null
2023-11-20T16:18:46Z
--- library_name: peft base_model: bigcode/starcoderbase-7b license: bigcode-openrail-m --- # Model Card for Model ID First pass at finetuning `bigcode/starcoderbase-7b` on the Elixir language subset of `bigcode/the-stack-dedup` ## Model Details ### Model Description - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [arpieb/peft-lora-starcoderbase-7b-personal-copilot-elixir](https://huggingface.co/arpieb/peft-lora-starcoderbase-7b-personal-copilot-elixir) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) ### Training Procedure Based on the finetuning workflow detailed in [Personal Copilot: Train Your Own Coding Assistant](https://huggingface.co/blog/personal-copilot), specifically the training code found under `personal_copilot/training` in the repo [pacman100/DHS-LLM-Workshop](https://github.com/pacman100/DHS-LLM-Workshop). 
Script used to train the model: ```bash python train.py \ --model_path "bigcode/starcoderbase-7b" \ --dataset_name "bigcode/the-stack-dedup" \ --subset "data/elixir" \ --data_column "content" \ --split "train" \ --seq_length 2048 \ --max_steps 2000 \ --batch_size 4 \ --gradient_accumulation_steps 4 \ --learning_rate 5e-4 \ --lr_scheduler_type "cosine" \ --weight_decay 0.01 \ --num_warmup_steps 30 \ --eval_freq 100 \ --save_freq 100 \ --log_freq 25 \ --num_workers 4 \ --bf16 \ --no_fp16 \ --output_dir "peft-lora-starcoderbase-7b-personal-copilot-rtx4090-elixir" \ --push_to_hub "false" \ --fim_rate 0.5 \ --fim_spm_rate 0.5 \ --use_flash_attn \ --use_peft_lora \ --lora_r 32 \ --lora_alpha 64 \ --lora_dropout 0.0 \ --lora_target_modules "c_proj,c_attn,q_attn,c_fc,c_proj" \ --use_4bit_qunatization \ --use_nested_quant \ --bnb_4bit_compute_dtype "bfloat16" ``` #### Preprocessing N/A #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). NOTE the RTX-4090 is not available in the above estimator; will update once there is data available. - **Hardware Type:** NVIDIA GeForce RTX 4090 - **Hours used:** ~9h (actual run timing lost :facepalm:) - **Cloud Provider:** Local rig - **Compute Region:** N/A - **Carbon Emitted:** N/A ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure #### Hardware Local DL rig with the following configuration: - NVIDIA GeForce RTX 4090 - Intel(R) Core(TM) i7-7800X CPU @ 3.50GHz - 128GB RAM #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. 
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.2.dev0
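Since the "How to Get Started" section above is still a placeholder, here is a minimal loading sketch assuming the standard PEFT adapter workflow (the dtype/device settings are illustrative, not prescribed by the card):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigcode/starcoderbase-7b"
adapter_id = "arpieb/peft-lora-starcoderbase-7b-personal-copilot-elixir"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "defmodule Math do\n  def fib(n) do"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```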
justinsu/projektron-model
justinsu
2023-11-21T15:57:41Z
6
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "german", "deutsch", "de", "en", "dataset:Christoph911/German-legal-SQuAD", "dataset:philschmid/test_german_squad", "arxiv:2307.09288", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-21T13:37:44Z
--- language: - de - en pipeline_tag: text-generation inference: false tags: - pytorch - llama - llama-2 - german - deutsch datasets: - Christoph911/German-legal-SQuAD - philschmid/test_german_squad --- **Please check out EM German, our new German-speaking LLM model family with significantly improved capabilities. EM German is available in Llama2 7b, 13b and 70b and Mistral- and LeoLM-based versions! All information and download links can be found [here](https://github.com/jphme/EM_German/blob/main/README.md).** # Llama 2 13b Chat German Llama-2-13b-chat-german is a variant of [Meta](https://huggingface.co/meta-llama)'s [Llama 2 13b Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model, finetuned on an additional dataset in the German language. This model is optimized for German text, providing proficiency in understanding, generating, and interacting with German language content. However, the model is not yet fully optimized for German, as it has been trained on a small, experimental dataset and has limited capabilities due to the small parameter count. Some of the finetuning data is also targeted towards factual retrieval (only answer questions from information in the context and refuse to hallucinate), and the model should perform better on these tasks than the original Llama 2 Chat. I am working on improving the model's capabilities and will update the model if there is sufficient interest. A quantized GGML version for use with llama.cpp, kobold.cpp and other GUIs for CPU inference can be found [here](https://huggingface.co/jphme/Llama-2-13b-chat-german-GGML). Please note the license of the base model, which is contained in the repo under LICENSE.TXT, and see the original model card below for more information. ## Data * Proprietary German Conversation Dataset * German SQuAD and German legal SQuAD data (see datasets), augmented with "wrong" contexts, to improve factual RAG ## Prompt Template Llama2 Chat uses a new prompt format: ``` <s>[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Please answer in the same language as the user. <</SYS>> This is a test question[/INST] This is an answer </s><s> ``` See also the original implementation [here](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L213). There is also a (so far undocumented?) method right in transformers to generate the correct tokenization: [LlamaTokenizer._build_conversation_input_ids](https://github.com/huggingface/transformers/blob/b257c46a075419c09e5ce5c5aa39bc346ecdb9a5/src/transformers/models/llama/tokenization_llama.py#L334). ## Results I have only evaluated the output on a small, handcrafted sample of test prompts in German, confirming that the model's ability to understand and generate German text exceeds that of the base model in many situations. More detailed evaluations and examples will be posted soon. ## Examples *These examples were not in the training set. 
They have been generated with a quantized version (Q4_0 for the German finetuned version, Q4_1 for the original Llama2 Chat version), so FP16 results should be better.* ### Example 1 (General Knowledge) **Question**: > Wie heißt die Hauptstadt von Nordrhein-Westfalen? **Answer Original Llama 2 13b Chat**: > Hello! As your helpful and respectful assistant, I'm here to help you with any questions you may have. The main city of North Rhine-Westphalia is called Düsseldorf. Is there anything else you would like to know about this topic or the region in general? Please let me know if there's anything else I can assist you with! **Answer Llama 2 13b Chat German**: > Die Hauptstadt von Nordrhein-Westfalen ist Düsseldorf. ### Example 2 (RAG/Factual Retrieval): **Question**: > Warum ist die Mongolische Volksrepublik nicht dem Warschauer Pakt beigetreten? > > Kontext zur Beantwortung der Frage: Warschauer_Pakt > > === Bilaterale Verträge über Freundschaft, Zusammenarbeit und gegenseitigen Beistand === Mit den bilateralen Bündnisverträgen wurde die Verpflichtung zur gegenseitigen Hilfeleistung unterzeichnet, um alle gewaltsamen militärischen Handlungen, die sich gegen die territoriale Integrität und Souveränität einer Vertragspartei richteten, zu verhindern. Den ersten dieser Freundschaftsverträge hatte die Sowjetunion schon während des Krieges am 12. Dezember 1943 mit der tschechoslowakischen Exilregierung abgeschlossen, der am 27. November 1963 für die Tschechoslowakei verlängert wurde. Von 1943 bis 1949 gab es bereits 23 bilaterale Verträge über Freundschaft, Zusammenarbeit und gegenseitigen Beistand (VFZ) der ersten Generation in Osteuropa. Neben diesem Vertragssystem bestanden ab 1956/57 auch weitere Abkommen: * Truppenstationierungsabkommen der Sowjetunion mit der DDR (12. März 1957), * Truppenstationierungsabkommen der Sowjetunion mit der Volksrepublik Polen (17. Dezember 1956), * Truppenstationierungsabkommen der Sowjetunion mit Rumänien (15. April 1957) und * Truppenstationierungsabkommen der Sowjetunion mit Ungarn (27. Mai 1957) jeweils mit einer Laufzeit von 20 Jahren. Aber bereits der Vertrag über die Beziehungen zwischen der DDR und der Sowjetunion vom 20. September 1950 zur Grenzregelung enthielt eine Vereinbarung zur Stationierung von sowjetischen Truppen auf dem Gebiet der DDR. Im Juli 1963 bat auch die Mongolische Volksrepublik, dem Warschauer Pakt nach Artikel 9 des Warschauer Vertrages beizutreten. Dazu hätte ein Sonderprotokoll gefasst werden müssen, da der Vertragstext nach Artikel 4 sich nur auf Europa bezog. Aufgrund des aufkommenden Chinesisch-sowjetischen Zerwürfnisses gab es keinen Beitritt, sondern es blieb bei einem Beobachterstatus. Stattdessen wurden ab 1966 sowjetische Truppenstationierungen vereinbart. Nachdem die Sowjetunion mit der DDR am 12. Juni 1964 einen Vertrag über Freundschaft, Zusammenarbeit und gegenseitigen Beistand schloss, der die volle Einbeziehung der DDR in das bilaterale Bündnissystem vorsah, wurden in der Zeit von 1964 bis 1972 insgesamt 20 Bündnisverträge der zweiten Generation unterzeichnet: Die Hegemonie der Sowjetunion wurde durch die bilateralen Bündnisverträge noch verstärkt, da diese eine unmittelbare Beistandsverpflichtung bei einem bewaffneten Angriff vorsahen, die auch in den meisten Verträgen sich nicht nur auf Europa beschränkte. 
In der dritten Generation der Verträge über Freundschaft, Zusammenarbeit und gegenseitigen Beistand erfolgte nach der Unterzeichnung der Schlussakte von Helsinki der Konferenz über Sicherheit und Zusammenarbeit in Europa (KSZE) im August 1975 die Einbeziehung der Breschnew-Doktrin und der ökonomischen Integration. Zudem wurde die Laufzeit der Verträge auf 25 Jahre festgelegt. Die DDR unterzeichnete am 7. Oktober 1975 mit der Sowjetunion diesen neuen Vertrag sowie später weitere mit Ungarn (24. März 1977), Polen (29. Mai 1977), Bulgarien (14. September 1977) und der Tschechoslowakei (3. Oktober 1977). > > Bitte beantworte die Frage mit einem kurzen, wörtlichen Zitat aus dem Kontext. Sollte der Kontext keine Antwort auf die Frage enthalten, bitte antworte dass die Frage mit den gegebenen Informationen nicht zu beantworten ist. **Answer Original Llama 2 13b Chat**: > Die Mongolische Volksrepublik hat nicht dem Warschauer Pakt beigetreten, weil es keinen Beitritt gab, sondern nur einen Beobachterstatus. (Quelle: Der Kontext) > **Answer Llama 2 13b Chat German**: > Aufgrund des aufkommenden Chinesisch-sowjetischen Zerwürfnisses gab es keinen Beitritt, sondern es blieb bei einem Beobachterstatus ### Example 3 (RAG / Factual Retrieval negative): **Question**: > Nach was benannte Spielberg seine Produktionsfirma Anfang der 1980er? > > Kontext zur Beantwortung der Frage: Webbrowser > > == Marktanteile und deren Messung == Bild zeigt die lt. Statistik von StatCounter meistverwendeten Browser nach Ländern 9/2019. Die Statistik für März 2020 ist über folgenden Weblink abrufbar: Die tatsächliche Verbreitung eines Webbrowsers ist nicht zweifelsfrei feststellbar. Verschiedene Anbieter veröffentlichen Statistiken über die Verbreitung von Webbrowsern aufgrund unterschiedlicher häufig recht begrenzter Datenbasen. Da die generelle Verbreitungsrate eines Browsers von verschiedensten Faktoren beeinflusst wird, sind diese Statistiken unterschiedlich aussagekräftig und kommen zu teilweise stark unterschiedlichen, scheinbar widersprüchlichen Ergebnissen. So schwankt die Verbreitung eines Browsers je nach Themengebiet einer aufgerufenen Webseite, Herkunftsregion der aufrufenden Person und dem Zeitpunkt der Messung. Beispielsweise können Benutzer an ihrem Arbeitsplatz an die Verwendung eines vorgegebenen Webbrowsers gebunden sein, privat jedoch einen anderen Browser bevorzugen und verwenden. Auch verschiedene Ereignisse führen zu starken Schwankungen. So steigt der Marktanteil bei der Veröffentlichung einer neuen Hauptversion oder sinkt beim Bekanntwerden einer Sicherheitslücke. Insgesamt geben Browserstatistiken nur die grobe Verteilung und ggf. Trends an, da jede Statistik nur auf der Auswertung der Logdateien ausgewählter Websites beruht und es keine unbestritten repräsentative Auswahl dafür zu verwendender Websites gibt. Die Messungen werden in der Regel anhand des sogenannten ''User-Agent''-Headers durchgeführt, mit der sich der verwendete Browser identifizieren lässt und die dieser bei jeder Anfrage an den Server übermittelt. Bei praktisch allen Browsern kann diese Kennung vom Benutzer verändert werden. Die weltweiten Marktanteile von Desktop-Browsern sehen laut Netmarketshare Stand Juli 2018 und September 2019 prozentual wie folgt aus: Bei Berücksichtigung auch der Browser in den inzwischen sehr häufig verwendeten mobilen Geräten wie Smartphone, Tablet etc. 
ergibt sich laut den veröffentlichten Statistiken von Netmarketshare, für September 2019 folgendes Bild: Chrome 66 %, Safari 19 %, Firefox 4 %, IE 2,6 %, Edge 2,5 %, QQ 1,4 % etc. Die Verbreitung laut Statistik von StatCounter der verschiedenen Browser auf mobilen Geräten ist hier abrufbar. Die Auswahl der Anzeige ist editierbar. > > Bitte beantworte die Frage mit einem kurzen, wörtlichen Zitat aus dem Kontext. Sollte der Kontext keine Antwort auf die Frage enthalten, bitte antworte dass die Frage mit den gegebenen Informationen nicht zu beantworten ist. **Answer Original Llama 2 13b Chat**: > Sure! Here's the answer to your question with a direct quote from the context: "According to Netmarketshare, in July 2018 and September 2019, the worldwide market share of desktop browsers looked like this: Chrome 66%, Safari 19%, Firefox 4%, IE 2.6%, Edge 2.5%, QQ 1.4% etc." So, according to Netmarketshare's statistics, in July 2018 and September 2019, Google Chrome had a market share of 66%. > **Answer Llama 2 13b Chat German**: > Mit den gegebenen Informationen ist diese Frage nicht zu beantworten. ---------------------------- ---------------------------- # Original **Llama 2** model card Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. 
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. 
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
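As a short usage sketch for the German finetune documented at the top of this card — standard `transformers` generation with a hand-assembled Llama 2 chat prompt (the system prompt is abbreviated here for brevity):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "justinsu/projektron-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The tokenizer adds the leading <s> (BOS) token itself.
prompt = (
    "[INST] <<SYS>>\nPlease answer in the same language as the user.\n<</SYS>>\n\n"
    "Wie heißt die Hauptstadt von Nordrhein-Westfalen?[/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```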
pcuenq/Llama-2-7b-chat-gguf
pcuenq
2023-11-21T15:50:29Z
5
1
null
[ "gguf", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:quantized:meta-llama/Llama-2-7b-chat-hf", "endpoints_compatible", "region:us" ]
null
2023-11-21T15:08:08Z
--- base_model: meta-llama/Llama-2-7b-chat-hf --- 16-bit GGUF version of https://huggingface.co/meta-llama/Llama-2-7b-chat-hf. For quantized versions, see https://huggingface.co/models?search=thebloke/llama-2-7b-chat
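A minimal sketch for running this file with `llama-cpp-python` (the exact `.gguf` filename inside the repo is an assumption; download it first and point `model_path` at it):

```python
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b-chat.f16.gguf")  # filename is an assumption
print(llm("Q: What is the capital of France? A:", max_tokens=32)["choices"][0]["text"])
```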
KBlueLeaf/kohaku-xl-beta7.1
KBlueLeaf
2023-11-21T15:46:31Z
114
5
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-02T14:39:53Z
--- license: creativeml-openrail-m ---
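The card documents nothing beyond the license, but the `diffusers:StableDiffusionXLPipeline` tag suggests the usual SDXL loading path; a minimal sketch (the tag-style prompt is an assumption):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "KBlueLeaf/kohaku-xl-beta7.1", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, solo, upper body", num_inference_steps=24).images[0]  # assumed prompt style
image.save("sample.png")
```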
jguevara/dqn-SpaceInvadersNoFrameskip-v4
jguevara
2023-11-21T15:42:14Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T15:41:40Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 545.00 +/- 94.47 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jguevara -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jguevara -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jguevara ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
digiplay/MyTale_v2.0_kurotsune
digiplay
2023-11-21T15:37:31Z
0
0
null
[ "license:other", "region:us" ]
null
2023-11-21T15:28:44Z
--- license: other --- Model info: https://civitai.com/models/185333?modelVersionId=212378 Author's civitai page: https://civitai.com/user/kurotsune Sample image: ![tmp6yn8hg45.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/0m7W8R-luhy3UPHfzu5eZ.png)
BlitherBoom/ppo-LunarLander-v2
BlitherBoom
2023-11-21T15:30:30Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T15:30:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 201.11 +/- 97.21 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename is an assumption.
checkpoint = load_from_hub(repo_id="BlitherBoom/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Sigurdur/ice-tokenizer
Sigurdur
2023-11-21T15:30:25Z
0
0
transformers
[ "transformers", "is", "endpoints_compatible", "region:us" ]
null
2023-11-21T15:21:21Z
--- language: - is library_name: transformers --- # Icelandic Tokenizer README ## Overview This BPE (Byte Pair Encoding) tokenizer is designed for the Icelandic GPT model, available at [Sigurdur/ice-gpt](https://huggingface.co/Sigurdur/ice-gpt). Trained on the annotated version of the Icelandic Gigaword Corpus (IGC-2022), it segments Icelandic text into subword tokens. ## Usage Integrate this tokenizer into your NLP pipeline for preprocessing Icelandic text. The following example demonstrates basic usage: ```python from transformers import GPT2Tokenizer # Load the tokenizer tokenizer = GPT2Tokenizer.from_pretrained("Sigurdur/ice-tokenizer") tokenizer.pad_token_id = tokenizer.eos_token_id tokenizer("Halló heimur!")["input_ids"] ``` ## Citation If you use this tokenizer in your work, please cite the original source of the training data: ```bibtex @misc{20.500.12537/254, title = {Icelandic Gigaword Corpus ({IGC}-2022) - annotated version}, author = {Barkarson, Starkaður and Steingrímsson, Steinþór and Andrésdóttir, Þórdís Dröfn and Hafsteinsdóttir, Hildur and Ingimundarson, Finnur Ágúst and Magnússon, Árni Davíð}, url = {http://hdl.handle.net/20.500.12537/254}, note = {{CLARIN}-{IS}}, year = {2022} } ``` ## Feedback We welcome user feedback to enhance the tokenizer's functionality. Feel free to reach out with your insights and suggestions. Happy tokenizing! Sigurdur Haukur Birgisson (readme created with chatgpt)
AswanthCManoj/azma-llama2-7b-hf-lora-adapter
AswanthCManoj
2023-11-21T15:24:22Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2023-11-21T15:24:07Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
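Since the card above is still an unfilled template, here is a minimal sketch for attaching the adapter to the base model named in the metadata, assuming the standard PEFT API (4-bit loading mirrors the quantization config above):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model comes from the card metadata; 4-bit loading mirrors the bitsandbytes config.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", load_in_4bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "AswanthCManoj/azma-llama2-7b-hf-lora-adapter")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```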
lucianosb/canarim-7B-GGUF
lucianosb
2023-11-21T15:12:30Z
67
2
null
[ "gguf", "text-generation", "pt", "license:llama2", "region:us" ]
text-generation
2023-11-17T07:44:27Z
--- inference: false language: - pt model_creator: Maicon Domingues model_link: https://huggingface.co/dominguesm/canarim-7b model_name: Canarim 7B model_type: llama quantized_by: lucianosb pipeline_tag: text-generation license: llama2 --- # Canarim 7B - GGUF - Model creator: [Maicon Domingues](https://nlp.rocks/) - Original model: [Canarim-7B](https://huggingface.co/dominguesm/canarim-7b) These models were not instruction-tuned, i.e., they are not chatbots. They work well on few-shot tasks: you provide examples of inputs and outputs, followed by a new input, and the model generates the completion (the answer). ## Included Files | Name | Quant method | Bits | Size | Description | | ---- | ---- | ---- | ---- | ----- | | [canarim7b-q2_k.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q2_k.gguf) | q2_K | 2 | 2.83 GB | 2-bit quantization. Significant quality loss. Not recommended. | | [canarim7b-q3_k_m.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q3_k_m.gguf) | q3_K_M| 3 | 3.3 GB | 3-bit quantization. | | [canarim7b-q3_k_s.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q3_k_s.gguf) | q3_K_S | 3 | 2.95 GB | 3-bit quantization. | | [canarim7b-q4_0.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q4_0.gguf) | q4_0 | 4 | 3.83 GB | 4-bit quantization. Prefer Q3_K_M instead. | | [canarim7b-q4_k_s.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q4_k_s.gguf) | q4_K_S | 4 | 3.86 GB | 4-bit quantization. | | [canarim7b-q3_k_l.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q3_k_l.gguf) | q3_K_L | 3 | 3.6 GB | 3-bit quantization with lower quality loss. | | [canarim7b-q4_k_m.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q4_k_m.gguf) | q4_K_M | 4 | 4.08 GB | 4-bit quantization. | | [canarim7b-q4_1.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q4_1.gguf) | q4_1 | 4 | 4.24 GB | 4-bit quantization. Higher accuracy than q4_0 but not as good as q5_0. Faster inference than the q5 models. | | [canarim7b-q5_0.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q5_0.gguf) | q5_0 | 5 | 4.65 GB | 5-bit quantization. Better accuracy, higher resource use, slower inference. | | [canarim7b-q5_1.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q5_1.gguf) | q5_1 | 5 | 5.06 GB | 5-bit quantization. Even better accuracy, higher resource use, slower inference. | | [canarim7b-q5_k_m.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q5_k_m.gguf) | q5_K_M | 5 | 4.78 GB | 5-bit quantization. Best performance. Recommended. | | [canarim7b-q5_k_s.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q5_k_s.gguf) | q5_K_S | 5 | 4.65 GB | 5-bit quantization. | | [canarim7b-q6_k.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q6_k.gguf) | q6_K | 6 | 5.53 GB | 6-bit quantization. | | [canarim7b-q8_0.gguf](https://huggingface.co/lucianosb/canarim-7b-GGUF/blob/main/canarim7b-q8_0.gguf) | q8_0 | 8 | 7.16 GB | 8-bit quantization. Nearly indistinguishable from float16. Resource-heavy and slower. | **Note**: the RAM figures above assume no GPU offloading. If layers are offloaded to the GPU, this reduces RAM usage and uses VRAM instead. 
## How to run with `llama.cpp` I used the following command. For best results, provide examples of the expected outputs. Example: > Complete the classification of the last sentence following the examples given. > > Isso é demais! // Positivo > > Isso é ruim! // Negativo > > Que filme maravilhoso! // Positivo > > Que série horrível! // ``` ./main -m ./models/canarim-7b-GGUF/canarim7b-q5_1.gguf --color --temp 0.5 -n 256 -p "### Instrução: {comando} ### Resposta:" ``` To understand the parameters, see [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) Try it for free on Google Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lucianosb/canarim-notebooks/blob/main/canarim_7b_llamacpp_5_k_m.ipynb) ## About the GGUF format GGUF is a new format introduced by the llama.cpp team on August 21, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. The main benefit of GGUF is that it is an extensible, future-proof format that stores more information about the model as metadata. It also includes significantly improved tokenization code, including, for the first time, full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates. Here is a list of clients and libraries known to support GGUF: - [llama.cpp](https://github.com/ggerganov/llama.cpp). - [ollama](https://ollama.ai/) - server with REST and CLI interfaces - [Faraday.dev](https://faraday.dev/) - app for Windows and Mac - [lollms-webui](https://github.com/ParisNeo/lollms-webui) - Lord of Large Language Models Web User Interface - [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - the llama-cpp-python backend should work soon as well. - [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of version 1.41! A powerful GGML web UI, with full GPU acceleration. Especially good for storytelling. - [LM Studio](https://lmstudio.ai), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD) and macOS. - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should work now, choose the c_transformers backend. A great web UI with many interesting features. Supports CUDA GPU acceleration. - [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. - [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. - [candle](https://github.com/huggingface/candle), added GGUF support on August 22. Candle is a Rust ML framework with a focus on performance, including GPU support and ease of use. - [LocalAI](https://github.com/go-skynet/LocalAI), added GGUF support on August 23. LocalAI provides a REST API for LLM and image-generation models. ## Template ```` ### Instrução: {prompt} ### Resposta: ````
GenzNepal/mt5-summarize-nepali
GenzNepal
2023-11-21T15:10:46Z
549
4
transformers
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "summarization", "nepali", "ne", "dataset:Someman/news_nepali", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-07-19T04:47:50Z
--- language: - ne license: apache-2.0 tags: - generated_from_trainer - summarization - nepali datasets: - Someman/news_nepali base_model: google/mt5-small model-index: - name: mt5-summarize-nepali results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-summarize-nepali This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on [Someman/news_nepali](https://huggingface.co/datasets/Someman/news_nepali) It achieves the following results on the evaluation set: - Loss: 0.6748 ## Usage ```python >>> import torch >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM # Predict with test data (first 5 rows) >>> model_ckpt = "GenzNepal/mt5-summarize-nepali" >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> t5_tokenizer = AutoTokenizer.from_pretrained(model_ckpt) >>> model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt).to(device) >>> text = "काठमाडौँ । हाल देशको पूर्वी तथा मध्य भू–भागमा मनसुनी प्रणालीको प्रभाव रहेको छ भने बाँकी भू–भागमा स्थानीय वायु र पश्चिमी वायुको आंशिक प्रभाव रहेको छ । यसका कारण हाल गण्डकी प्रदेशका थोरै स्थानमा र कर्णाली प्रदेशका एक–दुई स्थानमा मेघगर्जनरचट्याङसहित हल्कादेखि मध्यम वर्षा भइरहेको जल तथा मौसम विज्ञान विभाग, मौसम पूर्वानुमान महाशाखाले जनाएको छ । \ महाशाखका मौमसविद् रोजल लामिछानेका अनुसार पछिल्लो तीन घन्टामा गण्डकी प्रदेशका थोरै स्थान, बागमती प्रदेशका एक–दुई स्थानमा हल्कादेखि मध्यम वर्षा भइरहेको छ । काठमाडौँ उपत्यकासहित बागमती प्रदेशमा रातिको समयमा वर्षाको सम्भावना रहेको छ । यस्तै कोशी प्रदेश, मधेश प्रदेश र देशका पहाडी भू–भागमा बदली रहनुका साथै हल्का वर्षाको सम्भावना रहेको महाशाखाले उल्लेख गरेको छ । \ मौसमविद् लामिछानेले मनसुन प्रणाली क्रमिकरूपमा देशभर फैलिने क्रममा रहेको र यो देशभर विस्तार हुन अझै एक साता लाग्ने बताए । गत जेठ ३१ गते बुधबार नेपालको पूर्वी भेग भएर मनसुन प्रणाली भित्रिएको थियो । मनसुन सुस्तगतिमा रहेकाले देशको पश्चिम क्षेत्रमा फैलिन केही दिन लाग्ने जनाइएको छ ।" >>> inputs = t5_tokenizer(text, return_tensors="pt", max_length=1024, padding= "max_length", truncation=True, add_special_tokens=True) >>> generation = model.generate( input_ids = inputs['input_ids'].to(device), attention_mask=inputs['attention_mask'].to(device), num_beams=6, num_return_sequences=1, no_repeat_ngram_size=2, repetition_penalty=1.0, min_length=100, max_length=250, length_penalty=2.0, early_stopping=True ) # # Convert id tokens to text >>> output = t5_tokenizer.decode(generation[0], skip_special_tokens=True, clean_up_tokenization_spaces=True) >>> print(output) "हाल देशको पूर्वी तथा मध्य भू–भागमा मनसुनी प्रणालीको प्रभाव रहेको छ । बाँकी भूभागहरूमा स्थानीय वायु र पश्चिमी वायुको आंशिक सङ्क्रमण छ। गत वैशाख ३१ गते बुधबार नेपालको भेग भएर मनसुन प्रणाली भित्रिएको थियो भने हल्कादेखि मध्यम वर्षा भइरहेको जनाइएको छ भने मौसमविद् लामिछानेले उल्लेख गरेका छन् भने यो देशभर विस्तार हुन अझै एक साता लाग्नेछ। " ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 90 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7762 | 2.72 | 2500 | 0.7255 | | 0.6377 | 5.44 | 5000 | 0.6947 | | 
0.5674 | 8.15 | 7500 | 0.6748 | ### Framework versions - Transformers 4.30.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
IlluminatiPudding/a2c-PandaPickAndPlaceDense-v3_v20.1
IlluminatiPudding
2023-11-21T14:59:07Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlaceDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T14:52:31Z
--- library_name: stable-baselines3 tags: - PandaPickAndPlaceDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPickAndPlaceDense-v3 type: PandaPickAndPlaceDense-v3 metrics: - type: mean_reward value: -45.00 +/- 15.00 name: mean_reward verified: false --- # **A2C** Agent playing **PandaPickAndPlaceDense-v3** This is a trained model of a **A2C** agent playing **PandaPickAndPlaceDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="IlluminatiPudding/a2c-PandaPickAndPlaceDense-v3_v20.1",
    filename="a2c-PandaPickAndPlaceDense-v3.zip",
)
model = A2C.load(checkpoint)
```
hajarchane/falcon-7b-instruct-ft-adapters-PA
hajarchane
2023-11-21T14:30:43Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:vilsonrodrigues/falcon-7b-instruct-sharded", "base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded", "region:us" ]
null
2023-11-21T14:30:36Z
--- library_name: peft base_model: vilsonrodrigues/falcon-7b-instruct-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
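The card is a blank template; a minimal PEFT loading sketch against the sharded Falcon base named in the metadata (4-bit loading mirrors the quantization config above):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "vilsonrodrigues/falcon-7b-instruct-sharded",
    load_in_4bit=True,
    device_map="auto",
    trust_remote_code=True,  # Falcon repos historically ship custom modeling code
)
model = PeftModel.from_pretrained(base, "hajarchane/falcon-7b-instruct-ft-adapters-PA")
```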
vihangd/shearedplats-2.7b-v2
vihangd
2023-11-21T14:26:18Z
2,566
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-18T03:51:50Z
--- license: llama2 --- <p><h1> ShearedPlats-2.7b </h1></p> An experimental finetune of Sheared LLaMA 2.7b with Alpaca-QLoRA (version 2) <h2> Datasets </h2> Trained on Alpaca-style datasets <p><h2> Prompt Template </h2></p> Uses the Alpaca-style prompt template <br/> # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vihangd__shearedplats-2.7b-v2) | Metric | Value | |-----------------------|---------------------------| | Avg. | 36.72 | | ARC (25-shot) | 42.41 | | HellaSwag (10-shot) | 72.58 | | MMLU (5-shot) | 27.52 | | TruthfulQA (0-shot) | 39.76 | | Winogrande (5-shot) | 65.9 | | GSM8K (5-shot) | 1.52 | | DROP (3-shot) | 7.34 |
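For reference, a sketch of the Alpaca-style format the card points to (the exact wording used in training is an assumption):

```python
# Standard Alpaca-style template; the precise preamble phrasing is an assumption.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
prompt = ALPACA_TEMPLATE.format(instruction="Summarize the plot of Hamlet.")
```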
colinperth/HERO
colinperth
2023-11-21T14:18:44Z
0
0
null
[ "region:us" ]
null
2023-11-21T14:18:18Z
I want you to act as a chef. My first request is "I need help preparing a meal for a group of people."
crom87/sd_base-db-real13-3e-06-priorp-500-gc
crom87
2023-11-21T14:11:02Z
2
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-21T13:13:50Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: TOKstyle georgeclooney tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - crom87/sd_base-db-real13-3e-06-priorp-500-gc This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on TOKstyle georgeclooney using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ![img_4](./image_4.png) ![img_5](./image_5.png) DreamBooth for the text encoder was enabled: True.
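A minimal inference sketch using the instance prompt from the metadata (scheduler and settings are library defaults, not documented choices):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "crom87/sd_base-db-real13-3e-06-priorp-500-gc", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait of TOKstyle georgeclooney").images[0]  # instance prompt from the card
image.save("sample.png")
```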
codircodir/videomae-base-finetuned-kinetics-finetuned-ucf101-subset-finetuned-N
codircodir
2023-11-21T14:04:16Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset", "base_model:finetune:sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-11-21T12:12:35Z
--- license: cc-by-nc-4.0 base_model: sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-kinetics-finetuned-ucf101-subset-finetuned-N results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-kinetics-finetuned-ucf101-subset-finetuned-N This model is a fine-tuned version of [sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6134 - Accuracy: 0.8358 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 171 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9407 | 0.33 | 57 | 0.7452 | 0.8022 | | 0.4492 | 1.33 | 114 | 0.8826 | 0.8022 | | 0.3679 | 2.33 | 171 | 0.7072 | 0.8022 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Tokenizers 0.15.0
bh8648/esg_test6-epoch3
bh8648
2023-11-21T13:59:09Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-21T13:58:59Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0
jguevara/q-FrozenLake-v1-4x4-noSlippery
jguevara
2023-11-21T13:43:15Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T13:43:12Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python
# load_from_hub is the helper defined in the Hugging Face Deep RL Course notebook
# (not a library import); gym here is the Gym/Gymnasium package.
model = load_from_hub(repo_id="jguevara/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
AswanthCManoj/azma-zephyr-lora-adapter-2
AswanthCManoj
2023-11-21T13:37:47Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:HuggingFaceH4/zephyr-7b-alpha", "base_model:adapter:HuggingFaceH4/zephyr-7b-alpha", "region:us" ]
null
2023-11-21T13:36:55Z
--- library_name: peft base_model: HuggingFaceH4/zephyr-7b-alpha --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
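The card is a blank template; a minimal PEFT loading sketch against the Zephyr base named in the metadata (it is an assumption that the adapter targets a causal-LM head):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# 4-bit loading mirrors the bitsandbytes config documented above.
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-alpha", load_in_4bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "AswanthCManoj/azma-zephyr-lora-adapter-2")
```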
alpcansoydas/bert-base-arabic-emotion-analysis-v3
alpcansoydas
2023-11-21T13:33:08Z
40
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:asafaya/bert-base-arabic", "base_model:finetune:asafaya/bert-base-arabic", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-21T13:22:26Z
--- base_model: asafaya/bert-base-arabic tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-base-arabic-emotion-analysis-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-arabic-emotion-analysis-v3 This model is a fine-tuned version of [asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5940 - Accuracy: 0.7928 - F1: 0.7932 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.9993 | 1.0 | 102 | 0.6655 | 0.7528 | 0.7510 | | 0.5781 | 2.0 | 204 | 0.6018 | 0.7721 | 0.7720 | | 0.4441 | 3.0 | 306 | 0.5940 | 0.7928 | 0.7932 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
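A usage sketch with the `text-classification` pipeline (the label names depend on the undocumented training dataset):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="alpcansoydas/bert-base-arabic-emotion-analysis-v3")
print(clf("أنا سعيد جدا اليوم"))  # "I am very happy today"
```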
npvinHnivqn/GGUF-metamath-llemma
npvinHnivqn
2023-11-21T13:33:03Z
2
0
null
[ "gguf", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-11-21T08:52:13Z
--- license: mit --- ## Quick start ```python from ctransformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("npvinHnivqn/GGUF-metamath-llemma", model_file="metamath-llemma.gguf", model_type="llama", gpu_layers=0, context_length=768) model('''AI will ''', temperature=0.1) ```
shadowlamer/sd-zxspectrum-model-256
shadowlamer
2023-11-21T13:30:32Z
48
0
diffusers
[ "diffusers", "text-to-image", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-30T07:35:08Z
--- pipeline_tag: text-to-image --- This model was trained on a collection of ZX-Spectrum images from zxart.ee. It generates images like this: ![](samples/result_0.png) ![](samples/result_1.png) ![](samples/result_2.png) ![](samples/result_3.png) Usage example: ```python from diffusers import DiffusionPipeline import torch PROMPT = "Cute cat" SEED = 123 STEPS = 20 pipe = DiffusionPipeline.from_pretrained("shadowlamer/sd-zxspectrum-model-256", safety_checker=None, requires_safety_checker=False) generator = torch.Generator("cpu").manual_seed(SEED) image = pipe(PROMPT, height=192, width=256, num_inference_steps=STEPS, generator=generator, ).images[0] image.save("result.png") ```
922-CA/monika-ddlc-13b-v1-GGUF
922-CA
2023-11-21T13:26:18Z
5
3
null
[ "gguf", "dataset:922-CA/MoCha_v1", "license:llama2", "endpoints_compatible", "region:us" ]
null
2023-11-21T12:35:14Z
--- license: llama2 datasets: - 922-CA/MoCha_v1 --- # monika-ddlc-13b-v1-GGUF: Merge of [lora](https://huggingface.co/922-CA/monika-ddlc-13b-v1-qloras) with [LLaMA-2 13b base](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF), done with llamacpp. ### USAGE This is meant to be mainly a chat model with limited RP ability. For best results: replace "Human" and "Assistant" with "Player" and "Monika" like so: \nPlayer: (prompt)\nMonika: ### WARNINGS AND DISCLAIMERS This model is meant to closely reflect the characteristics of Monika. Despite this, there is always the chance that "Monika" will hallucinate and get information about herself wrong or act out of character (for example, in testing she usually knows her own club and its members, her game, and even her height and favorite ice cream flavor, but may still get her eye color wrong or mistake her developer as being a member of her club). Additionally, being character-focused means that this model may not be the smartest model/not as capable as others for simple tasks. Finally, this model is not guaranteed to output aligned or safe outputs, use at your own risk!
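A sketch of the recommended Player/Monika format with `llama-cpp-python` (the quantized filename is an assumption about the repo contents):

```python
from llama_cpp import Llama

llm = Llama(model_path="monika-ddlc-13b-v1.q4_k_m.gguf")  # filename is an assumption
prompt = "\nPlayer: How was the literature club today?\nMonika:"
print(llm(prompt, max_tokens=64, stop=["\nPlayer:"])["choices"][0]["text"])
```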
bupt/keywords_extract
bupt
2023-11-21T13:25:01Z
4
2
transformers
[ "transformers", "pytorch", "chatglm", "feature-extraction", "custom_code", "license:apache-2.0", "region:us" ]
feature-extraction
2023-11-21T12:52:19Z
--- license: apache-2.0 --- A legal-keywords dataset was built with ChatGPT-3.5-turbo; the chatglm2-6b base model was then fine-tuned on it with LoRA, yielding a model that can understand legal-consultation questions and extract law-related keywords.
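A usage sketch assuming the usual ChatGLM2 remote-code `chat` API (the repo carries the `custom_code` tag; the prompt wording is an illustration, not from the card):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bupt/keywords_extract", trust_remote_code=True)
model = AutoModel.from_pretrained("bupt/keywords_extract", trust_remote_code=True).half().cuda()
# Illustrative query: "Extract the keywords from this legal consultation: ..."
response, history = model.chat(tokenizer, "请提取以下法律咨询的关键词:合同到期后公司拒绝支付工资怎么办?", history=[])
print(response)
```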
grapemaid/db_model
grapemaid
2023-11-21T13:18:58Z
2
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-21T10:44:45Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - grapemaid/db_model This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
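A minimal inference sketch with the instance prompt from the metadata (the scene wording is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("grapemaid/db_model", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks dog in a bucket").images[0]  # subject prompt from the card
image.save("sks_dog.png")
```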
100yen/distilbert-base-uncased-finetuned-emotion
100yen
2023-11-21T13:18:54Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-21T12:05:37Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9216782978856498 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2289 - Accuracy: 0.9215 - F1: 0.9217 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8185 | 1.0 | 250 | 0.3311 | 0.9085 | 0.9077 | | 0.26 | 2.0 | 500 | 0.2289 | 0.9215 | 0.9217 | ### Framework versions - Transformers 4.35.2 - Pytorch 1.13.0 - Datasets 2.15.0 - Tokenizers 0.15.0
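A quick usage sketch; passing `top_k=None` returns scores for all six emotion labels rather than only the top one:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="100yen/distilbert-base-uncased-finetuned-emotion",
    top_k=None,
)
print(clf("I can't believe we finally shipped it!"))
```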
Moonxc/dreambooth-truck
Moonxc
2023-11-21T13:18:25Z
0
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-21T13:01:48Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks truck tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - Moonxc/dreambooth-truck This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks truck using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
nsuruguay05/EQASpa-7b
nsuruguay05
2023-11-21T13:08:02Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "es", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-16T18:46:32Z
--- language: - es --- ## EQASpa 7b This model is a fine-tuned version of [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) for the Extractive Question Answering task in Spanish. It was fine-tuned on the [QuALES 2022](https://www.fing.edu.uy/inco/grupos/pln/quales/) dataset training partition using LoRA for one epoch. ## Prompt format To use the model, the following prompting format should be applied: ### TEXTO: {{Context document}} ### PREGUNTA: {{Question}} ### RESPUESTA: ## Evaluation We evaluate the model on the test partition of the QuALES dataset, and compare it with one-shot prompting as a baseline. Prompt | Model | Acc_exact | F_bertscore --- | --- | --- | --- one-shot prompting | [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 0.025 | 0.614 one-shot prompting | [Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) | 0.192 | 0.700 default | EQASpa 7b | **0.225** | **0.713** ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
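A sketch that wires up the documented prompt format (the generation settings and example document are assumptions; the card gives none):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nsuruguay05/EQASpa-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt format as documented on the card; the context and question are illustrative.
prompt = "### TEXTO:\n{doc}\n\n### PREGUNTA:\n{q}\n\n### RESPUESTA:\n".format(
    doc="La capital de Uruguay es Montevideo.", q="¿Cuál es la capital de Uruguay?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```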
Sneka/mistral-json-finetune
Sneka
2023-11-21T13:03:58Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-21T12:17:50Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: mistral-json-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-json-finetune This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2986 | 0.57 | 50 | 0.6897 | | 0.4469 | 1.14 | 100 | 0.4239 | | 0.2978 | 1.71 | 150 | 0.3497 | | 0.2279 | 2.29 | 200 | 0.3126 | | 0.1869 | 2.86 | 250 | 0.2858 | | 0.137 | 3.43 | 300 | 0.2848 | | 0.1221 | 4.0 | 350 | 0.2713 | | 0.0918 | 4.57 | 400 | 0.2750 | | 0.0832 | 5.14 | 450 | 0.2781 | | 0.0681 | 5.71 | 500 | 0.2807 | | 0.055 | 6.29 | 550 | 0.2902 | | 0.0515 | 6.86 | 600 | 0.2746 | | 0.0463 | 7.43 | 650 | 0.3078 | | 0.0421 | 8.0 | 700 | 0.2971 | | 0.0354 | 8.57 | 750 | 0.3196 | | 0.0276 | 9.14 | 800 | 0.3490 | | 0.0281 | 9.71 | 850 | 0.3411 | | 0.026 | 10.29 | 900 | 0.3658 | | 0.0241 | 10.86 | 950 | 0.3661 | | 0.0209 | 11.43 | 1000 | 0.3730 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
Jaykumaran17/Mistral-7B-v0.1-bf16-sharded-finetuned-mental-health-conversational
Jaykumaran17
2023-11-21T13:02:45Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:ybelkada/Mistral-7B-v0.1-bf16-sharded", "base_model:finetune:ybelkada/Mistral-7B-v0.1-bf16-sharded", "region:us" ]
null
2023-11-21T09:36:11Z
--- base_model: ybelkada/Mistral-7B-v0.1-bf16-sharded tags: - generated_from_trainer model-index: - name: Mistral-7B-v0.1-bf16-sharded-finetuned-mental-health-conversational results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-bf16-sharded-finetuned-mental-health-conversational This model is a fine-tuned version of [ybelkada/Mistral-7B-v0.1-bf16-sharded](https://huggingface.co/ybelkada/Mistral-7B-v0.1-bf16-sharded) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 150 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
cyanelis/moinsv1.25
cyanelis
2023-11-21T12:58:55Z
0
4
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-12T09:33:57Z
--- license: creativeml-openrail-m --- More tags are recognized in this model.<br> The first type of recognized tags: num1, num2, ..., num33<br> The second type of recognized tags: moinszero, moinsun, moinsdeux<br> Each num* represents a composition.<br> moinszero = 2^(-0) = 1.00 = 100% complete<br> moinsun = 2^(-1) = 0.50 = 50% complete<br> moinsdeux = 2^(-2) = 0.25 = 25% complete<br> <br> Default settings:<br> DDIM sampler, 18 steps, 512×832 or 832×512<br> hires. fix with latent upscale to 832×1344 or 1344×832, denoising strength 0.50~0.65<br> CFG scale 6.5<br> <br> Join QQ group 755638032 for the latest information.<br>
aivanni/dqn-SpaceInvadersNoFrameskip-v4
aivanni
2023-11-21T12:57:30Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T12:56:56Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 634.00 +/- 141.88 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aivanni -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aivanni -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga aivanni ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
nova-sqoin/finetuned_llama
nova-sqoin
2023-11-21T12:52:27Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:finetune:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2023-11-20T12:12:04Z
--- base_model: NousResearch/Llama-2-7b-chat-hf tags: - generated_from_trainer model-index: - name: finetuned_llama results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_llama This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.14.7 - Tokenizers 0.15.0
Beatsie13/gpt2-wikitext2
Beatsie13
2023-11-21T12:50:35Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-21T12:50:08Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: gpt2-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.1118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5497 | 1.0 | 2249 | 6.4722 | | 6.1915 | 2.0 | 4498 | 6.1994 | | 6.0127 | 3.0 | 6747 | 6.1118 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
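Since the card reports only the cross-entropy loss, the language-modeling perplexity follows directly from it; a quick derivation (computed from the reported eval loss, not an independently measured value):

```python
# Perplexity is exp(cross-entropy loss); derived from the reported eval loss of 6.1118.
import math

print(math.exp(6.1118))  # ≈ 451
```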
abrahamtek/Pixelcopter-PLE-v0
abrahamtek
2023-11-21T12:40:17Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T12:39:24Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 48.40 +/- 27.76 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ernlavr/xlm-roberta-base-IDMGSP-danish
ernlavr
2023-11-21T12:36:50Z
8
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "da", "dataset:ernlavr/IDMGSP-danish", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-19T21:57:27Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: xlm-roberta-base-IDMGSP-danish results: [] datasets: - ernlavr/IDMGSP-danish language: - da --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-IDMGSP-danish This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [ernlavr/IDMGSP-danish](https://huggingface.co/datasets/ernlavr/IDMGSP-danish) dataset. It achieves the following results on the evaluation set: - Loss: 1.0276 - Accuracy: 0.8530452362901187 - F1: 0.8630636995172495 ## Intended uses & limitations Binary classification: label `0` means the text is not AI-generated; label `1` means the text is AI-generated. ## Training and evaluation data [ernlavr/IDMGSP-danish](https://huggingface.co/datasets/ernlavr/IDMGSP-danish) dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:------------------:|:------------------:| | 0.3631 | 1.0 | 496 | 0.9902 | 0.5959059893858984 | 0.7104310032596887 | | 0.3208 | 2.0 | 992 | 1.0736 | 0.7261814505938843 | 0.780245411215901 | | 0.2191 | 3.0 | 1488 | 0.3496 | 0.8664392216325499 | 0.8717077315208156 | | 0.1548 | 4.0 | 1984 | 0.5604 | 0.8155168056608542 | 0.8378858538751943 | | 0.1127 | 5.0 | 2480 | 0.4164 | 0.8641647712913824 | 0.871056735036584 | | 0.1372 | 6.0 | 2976 | 0.5515 | 0.8822340156684357 | 0.8833833833833834 | | 0.0279 | 7.0 | 3472 | 0.7203 | 0.8458428102097548 | 0.8573766658873042 | | 0.0456 | 8.0 | 3968 | 0.8584 | 0.8498862774829417 | 0.8604651162790697 | | 0.0095 | 9.0 | 4464 | 0.9214 | 0.8512762193580996 | 0.861415283174379 | | 0.0076 | 10.0 | 4960 | 1.0276 | 0.8530452362901187 | 0.8630636995172495 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.1 - Datasets 2.14.6 - Tokenizers 0.14.1
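A minimal usage sketch for the labels described above (the Danish example sentence is illustrative, and the predicted labels may surface as `LABEL_0`/`LABEL_1` depending on the checkpoint config):

```python
# Minimal sketch: scores whether a Danish text is AI-generated (label 1) or not (label 0).
from transformers import pipeline

detector = pipeline("text-classification", model="ernlavr/xlm-roberta-base-IDMGSP-danish")
print(detector("Dette er et eksempel på en dansk videnskabelig tekst."))  # illustrative input
```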
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_rand75_seed123
behzadnet
2023-11-21T12:29:55Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2023-11-21T04:14:55Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
dariolopez/Llama-2-databricks-dolly-oasst1-es-axolotl
dariolopez
2023-11-21T12:17:42Z
9
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "es", "dataset:dariolopez/Llama-2-databricks-dolly-oasst1-es", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-03T18:58:58Z
--- license: apache-2.0 datasets: - dariolopez/Llama-2-databricks-dolly-oasst1-es language: - es --- Llama 2 (7B) fine-tuned on our [own Spanish instruction dataset](https://huggingface.co/datasets/dariolopez/Llama-2-databricks-dolly-oasst1-es). # Based on https://mlabonne.github.io/blog/posts/A_Beginners_Guide_to_LLM_Finetuning.html
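A minimal generation sketch, assuming the checkpoint loads as a standard causal LM (the instruction template below is an assumption; check the dataset card for the exact prompt format used during fine-tuning):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dariolopez/Llama-2-databricks-dolly-oasst1-es-axolotl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instrucción: Explica qué es el aprendizaje automático.\n### Respuesta:"  # assumed template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```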
oneonlee/KoAirBERT
oneonlee
2023-11-21T12:16:19Z
3
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "aviation", "aviation safety", "BERT", "Korean BERT", "Aviation BERT", "ko", "en", "arxiv:1810.04805", "license:agpl-3.0", "endpoints_compatible", "region:us" ]
null
2023-11-19T12:44:46Z
--- thumbnail: "KoAirBERT: Korean BERT Model Specialized for Aviation Safety Domain" license: agpl-3.0 language: - ko - en library_name: transformers tags: - aviation - aviation safety - BERT - Korean BERT - Aviation BERT --- <!--- Copyright (C) 2023 Donggeon Lee This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. --> <div align="center"> <h1>🤗 KoAirBERT ✈️</h1> <p>항공 안전 도메인에 특화된 한국어 BERT 모델</p> </div> <p align="center"> <img style="margin: 0.5em;" alt="Python" src="https://img.shields.io/badge/python-3.8-blue.svg"> <a style="margin: 0px;" href="https://huggingface.co/oneonlee/KoAirBERT"><img style="margin: 0.5em;" alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97-Models%20on%20Hub-yellow"></a> <a style="margin: 0px;" href="https://github.com/oneonlee/KoAirBERT/blob/master/LICENSE"><img style="margin: 0.5em;" alt="License: AGPL-v3" src="https://img.shields.io/badge/License-AGPL--v3-blue.svg"></a> <a style="margin: 0px;" href="https://doi.org/10.5281/zenodo.10158254"><img style="margin: 0.5em;" alt="DOI" src="https://img.shields.io/badge/DOI-10.5281%2Fzenodo.10158254-blue"></a> </p> ## How to use 🤗 [Huggingface Hub](https://huggingface.co/oneonlee/KoAirBERT/tree/main)에 업로드 된 모델을 바로 사용할 수 있습니다 :) ```python # Load model directly from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("oneonlee/KoAirBERT") model = AutoModelForPreTraining.from_pretrained("oneonlee/KoAirBERT") ``` ## Reference - [BERT](https://arxiv.org/abs/1810.04805) - [klue/bert-base](https://huggingface.co/klue/bert-base) - [huggingface/transformers](https://github.com/huggingface/transformers/) - pytorch [language-modeling](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) examples - [항공 안전 정보지 (GYRO)](https://www.airsafety.or.kr/airsafety/board/gyro/list.do) - [항공위키](https://airtravelinfo.kr/wiki/) - [국토교통부 항공용어사전](https://www.airportal.go.kr/knowledge/library/KdMain01.jsp) - [항공안전 자율보고 백서(2021)](https://www.airsafety.or.kr/airsafety/board/aspds/view.do?bbsNo=4431) ## Citation 이 코드를 연구용으로 사용하는 경우 아래와 같이 인용해주세요. ```bibtex @software{lee_2023_10158254, author = {Lee, DongGeon}, title = {KoAirBERT: Korean BERT Model Specialized for Aviation Safety Domain}, month = nov, year = 2023, publisher = {Zenodo}, version = {v1.0.0}, doi = {10.5281/zenodo.10158254}, url = {https://doi.org/10.5281/zenodo.10158254} } ``` ## License `KoAirBERT`는 `AGPL-3.0` 라이선스 하에 공개되어 있습니다. 모델 및 코드를 사용할 경우 라이선스 내용을 준수해주세요. 라이선스 전문은 [LICENSE 파일](LICENSE)에서 확인하실 수 있습니다.
Aunsiels/AscentGenT-LM
Aunsiels
2023-11-21T12:14:50Z
5
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "arxiv:2306.12766", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-21T12:09:38Z
--- license: mit language: - en widget: - text: "fish lives in ocean[SEP]" example_title: "Mapping1" - text: "elephant be killed in africa[SEP]" example_title: "Mapping2" - text: "doctor write prescription[SEP]" example_title: "Mapping3" - text: "fish " example_title: "KB generation1" - text: "elephant capable of " example_title: "KB generation2" - text: "doctor at location " example_title: "KB generation3" - text: "Some air pollutants fall to earth in the form of acid rain.[SEP]" example_title: "Relation Extraction1" - text: "Elon Musk Races to Secure Financing for Twitter Bid.[SEP]" example_title: "Relation Extraction2" --- # Ascent-GenT LM-based Alignment <!-- Provide a quick summary of what the model is/does. --> This model is trained to translate an open triple (initially from Ascent++) into a closed triple that uses relationships from ConceptNet. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Julien Romero - **Model type:** GPT2 - **Language(s) (NLP):** English - **Finetuned from model:** gpt2-large ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [https://github.com/Aunsiels/GenT](https://github.com/Aunsiels/GenT) - **Paper [optional]:** [https://arxiv.org/pdf/2306.12766.pdf](https://arxiv.org/pdf/2306.12766.pdf) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> We observed good results by using beam search decoding. Other decoding methods might be less suitable. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> You must give the open triple with subject, predicate, and object separated by a tabulation and then followed by [SEP]. Examples: ``` fish lives in ocean[SEP] elephant be killed in africa[SEP] doctor write prescription[SEP] ``` ### From Subject/Subject-Predicate <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> It is also possible to give a subject or a subject-predicate pair to generate a knowledge base directly. The output must be parsed correctly in this case. Examples: ``` fish elephant capable of doctor at location ``` ### From Text <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> When used with text as input, this model can behave like a relation extractor, although it was not trained on this task. Examples: ``` Some air pollutants fall to earth in the form of acid rain.[SEP] Elon Musk Races to Secure Financing for Twitter Bid.[SEP] ``` ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @InProceedings{10.1007/978-3-031-47240-4_20, author="Romero, Julien and Razniewski, Simon", editor="Payne, Terry R. and Presutti, Valentina and Qi, Guilin and Poveda-Villal{\'o}n, Mar{\'i}a and Stoilos, Giorgos and Hollink, Laura and Kaoudi, Zoi and Cheng, Gong and Li, Juanzi", title="Mapping and Cleaning Open Commonsense Knowledge Bases with Generative Translation", booktitle="The Semantic Web -- ISWC 2023", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="368--387", abstract="Structured knowledge bases (KBs) are the backbone of many knowledge-intensive applications, and their automated construction has received considerable attention. 
In particular, open information extraction (OpenIE) is often used to induce structure from a text. However, although it allows high recall, the extracted knowledge tends to inherit noise from the sources and the OpenIE algorithm. Besides, OpenIE tuples contain an open-ended, non-canonicalized set of relations, making the extracted knowledge's downstream exploitation harder. In this paper, we study the problem of mapping an open KB into the fixed schema of an existing KB, specifically for the case of commonsense knowledge. We propose approaching the problem by generative translation, i.e., by training a language model to generate fixed-schema assertions from open ones. Experiments show that this approach occupies a sweet spot between traditional manual, rule-based, or classification-based canonicalization and purely generative KB construction like COMET. Moreover, it produces higher mapping accuracy than the former while avoiding the association-based noise of the latter. Code and data are available. (https://github.com/Aunsiels/GenT, julienromero.fr/data/GenT)", isbn="978-3-031-47240-4" }
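A minimal beam-search decoding sketch for the mapping use described in this card (the beam width is an illustrative choice; the tab-separated input follows the card's description of the expected format):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Aunsiels/AscentGenT-LM")
model = AutoModelForCausalLM.from_pretrained("Aunsiels/AscentGenT-LM")

inputs = tok("fish\tlives in\tocean[SEP]", return_tensors="pt")  # tab-separated open triple
output = model.generate(**inputs, num_beams=5, max_new_tokens=32)
print(tok.decode(output[0]))
```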
SE6446/A2C-LunarLander
SE6446
2023-11-21T12:12:35Z
0
0
stable-baselines3
[ "stable-baselines3", "reinforcement-learning", "license:unlicense", "region:us" ]
reinforcement-learning
2023-11-21T12:02:03Z
--- license: unlicense pipeline_tag: reinforcement-learning library_name: stable-baselines3 --- # A2C lunar lander An example of using an A2C policy for RL; it does surprisingly well given the little time it was trained for. # Usage ```python from stable_baselines3 import A2C from huggingface_sb3 import load_from_hub checkpoint = load_from_hub( repo_id="SE6446/A2C-LunarLander", filename="A2C_lander.zip", ) model = A2C.load(checkpoint) # load the downloaded checkpoint ```
Zainiii/phi-1_5b-lora-math
Zainiii
2023-11-21T12:08:19Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "region:us" ]
null
2023-11-21T08:14:58Z
--- library_name: peft base_model: microsoft/phi-1_5 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.3.dev0
gaya/distilbert-base-uncased-finetuned-squad
gaya
2023-11-21T12:08:03Z
42
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-11-12T07:44:37Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2202 | 1.0 | 5533 | 1.1500 | | 0.951 | 2.0 | 11066 | 1.1284 | | 0.7434 | 3.0 | 16599 | 1.1624 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
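A minimal extractive-QA sketch (the question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="gaya/distilbert-base-uncased-finetuned-squad")
print(qa(question="What dataset was the model fine-tuned on?",
         context="This model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset."))
```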
Aunsiels/QuasimodoGenT-Rule
Aunsiels
2023-11-21T12:07:05Z
4
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "arxiv:2306.12766", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-21T12:04:01Z
--- license: mit language: - en widget: - text: "fish lives in ocean[SEP]" example_title: "Mapping1" - text: "elephant be killed in africa[SEP]" example_title: "Mapping2" - text: "doctor write prescription[SEP]" example_title: "Mapping3" - text: "fish " example_title: "KB generation1" - text: "elephant capable of " example_title: "KB generation2" - text: "doctor at location " example_title: "KB generation3" - text: "Some air pollutants fall to earth in the form of acid rain.[SEP]" example_title: "Relation Extraction1" - text: "Elon Musk Races to Secure Financing for Twitter Bid.[SEP]" example_title: "Relation Extraction2" --- # Quasimodo-GenT Rule-based Alignment <!-- Provide a quick summary of what the model is/does. --> This model is trained to translate an open triple (initially from Quasimodo) into a closed triple that uses relationships from ConceptNet. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Julien Romero - **Model type:** GPT2 - **Language(s) (NLP):** English - **Finetuned from model:** gpt2-large ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [https://github.com/Aunsiels/GenT](https://github.com/Aunsiels/GenT) - **Paper [optional]:** [https://arxiv.org/pdf/2306.12766.pdf](https://arxiv.org/pdf/2306.12766.pdf) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> We observed good results by using beam search decoding. Other decoding methods might be less suitable. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> You must give the open triple with subject, predicate, and object separated by a tabulation and then followed by [SEP]. Examples: ``` fish lives in ocean[SEP] elephant be killed in africa[SEP] doctor write prescription[SEP] ``` ### From Subject/Subject-Predicate <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> It is also possible to give a subject or a subject-predicate pair to generate a knowledge base directly. The output must be parsed correctly in this case. Examples: ``` fish elephant capable of doctor at location ``` ### From Text <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> When used with text as input, this model can behave like a relation extractor, although it was not trained on this task. Examples: ``` Some air pollutants fall to earth in the form of acid rain.[SEP] Elon Musk Races to Secure Financing for Twitter Bid.[SEP] ``` ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @InProceedings{10.1007/978-3-031-47240-4_20, author="Romero, Julien and Razniewski, Simon", editor="Payne, Terry R. and Presutti, Valentina and Qi, Guilin and Poveda-Villal{\'o}n, Mar{\'i}a and Stoilos, Giorgos and Hollink, Laura and Kaoudi, Zoi and Cheng, Gong and Li, Juanzi", title="Mapping and Cleaning Open Commonsense Knowledge Bases with Generative Translation", booktitle="The Semantic Web -- ISWC 2023", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="368--387", abstract="Structured knowledge bases (KBs) are the backbone of many knowledge-intensive applications, and their automated construction has received considerable attention. 
In particular, open information extraction (OpenIE) is often used to induce structure from a text. However, although it allows high recall, the extracted knowledge tends to inherit noise from the sources and the OpenIE algorithm. Besides, OpenIE tuples contain an open-ended, non-canonicalized set of relations, making the extracted knowledge's downstream exploitation harder. In this paper, we study the problem of mapping an open KB into the fixed schema of an existing KB, specifically for the case of commonsense knowledge. We propose approaching the problem by generative translation, i.e., by training a language model to generate fixed-schema assertions from open ones. Experiments show that this approach occupies a sweet spot between traditional manual, rule-based, or classification-based canonicalization and purely generative KB construction like COMET. Moreover, it produces higher mapping accuracy than the former while avoiding the association-based noise of the latter. Code and data are available. (https://github.com/Aunsiels/GenT, julienromero.fr/data/GenT)", isbn="978-3-031-47240-4" }
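A minimal sketch of the KB-generation mode described in this card, prompting with a subject-predicate prefix (beam settings are illustrative; the card recommends beam search):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Aunsiels/QuasimodoGenT-Rule")
model = AutoModelForCausalLM.from_pretrained("Aunsiels/QuasimodoGenT-Rule")

inputs = tok("elephant capable of ", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3, max_new_tokens=24)
for seq in outputs:
    print(tok.decode(seq))
```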
Mohammed-Altaf/instruct-finetuned-medical-20b-adapters
Mohammed-Altaf
2023-11-21T11:56:45Z
2
0
peft
[ "peft", "safetensors", "medical", "en", "dataset:Mohammed-Altaf/medical-instruction-120k", "base_model:EleutherAI/gpt-neox-20b", "base_model:adapter:EleutherAI/gpt-neox-20b", "license:mit", "region:us" ]
null
2023-11-17T13:52:58Z
--- library_name: peft base_model: EleutherAI/gpt-neox-20b license: mit datasets: - Mohammed-Altaf/medical-instruction-120k language: - en tags: - medical --- ## What is this repo about? * This repository contains the QLoRA adapters for the GPT-NeoX-20B model. The dataset comprises about 120K examples, and the model's responses are quite good. ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
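A minimal loading sketch (an assumption, not a tested recipe from this card): it pairs the published adapters with the `EleutherAI/gpt-neox-20b` base model using the same 4-bit NF4 configuration listed above.

```python
# Hypothetical loading sketch; mirrors the bitsandbytes config used during training.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Mohammed-Altaf/instruct-finetuned-medical-20b-adapters")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```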
navazpv/finetuning-sentiment-model-3000-samples
navazpv
2023-11-21T11:49:04Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-21T11:43:38Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.87 - name: F1 type: f1 value: 0.8712871287128714 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3195 - Accuracy: 0.87 - F1: 0.8713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
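A minimal usage sketch (the example review is illustrative):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="navazpv/finetuning-sentiment-model-3000-samples")
print(clf("This movie was a pleasant surprise from start to finish."))
```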
ForRiver/wizardmath-lora-7b-512-1
ForRiver
2023-11-21T11:47:08Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:WizardLMTeam/WizardMath-7B-V1.0", "base_model:adapter:WizardLMTeam/WizardMath-7B-V1.0", "region:us" ]
null
2023-11-21T11:47:06Z
--- library_name: peft base_model: WizardLM/WizardMath-7B-V1.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.3.dev0
kazandaev/m2m100_1.2B
kazandaev
2023-11-21T11:46:37Z
17
0
transformers
[ "transformers", "pytorch", "safetensors", "m2m_100", "text2text-generation", "translation", "generated_from_trainer", "dataset:wmt16", "base_model:facebook/m2m100_1.2B", "base_model:finetune:facebook/m2m100_1.2B", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-10-01T22:54:41Z
--- license: mit base_model: facebook/m2m100_1.2B tags: - translation - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: m2m100_1.2B results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 config: ru-en split: validation args: ru-en metrics: - name: Bleu type: bleu value: 33.3632 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # m2m100_1.2B This model is a fine-tuned version of [facebook/m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 0.8189 - Bleu: 33.3632 - Gen Len: 36.176 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 10 - total_train_batch_size: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7163 | 1.0 | 47790 | 0.8189 | 33.3632 | 36.176 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1 - Datasets 2.14.5 - Tokenizers 0.14.1
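A minimal Russian-to-English translation sketch matching the wmt16 ru-en setup of this fine-tune (the example sentence is illustrative):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "kazandaev/m2m100_1.2B"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "ru"
encoded = tokenizer("Жизнь прекрасна.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```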
Jayicebear/Bart_cnn_multinews_fintuned
Jayicebear
2023-11-21T11:46:21Z
6
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "dataset:multi_news", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-21T11:28:29Z
--- license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer datasets: - multi_news model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the multi_news dataset. It achieves the following results on the evaluation set: - Loss: 3.0719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.1682 | 0.89 | 10000 | 3.1293 | | 2.8049 | 1.78 | 20000 | 3.0567 | | 2.4622 | 2.67 | 30000 | 3.0719 | ### Framework versions - Transformers 4.35.2 - Pytorch 1.13.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
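A minimal summarization sketch (the input text is illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Jayicebear/Bart_cnn_multinews_fintuned")
article = "Replace this with one or more long news articles to summarize."  # illustrative input
print(summarizer(article))
```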
SuperDan/Reinforce-pg-PixelCopter
SuperDan
2023-11-21T11:45:54Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T11:42:46Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pg-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 34.90 +/- 32.87 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
IlluminatiPudding/a2c-PandaPickAndPlaceDense-v3-v13.3
IlluminatiPudding
2023-11-21T11:44:55Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlaceDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T11:39:41Z
--- library_name: stable-baselines3 tags: - PandaPickAndPlaceDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPickAndPlaceDense-v3 type: PandaPickAndPlaceDense-v3 metrics: - type: mean_reward value: -50.00 +/- 0.00 name: mean_reward verified: false --- # **A2C** Agent playing **PandaPickAndPlaceDense-v3** This is a trained model of an **A2C** agent playing **PandaPickAndPlaceDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python # Minimal sketch; the checkpoint filename is an assumption. from stable_baselines3 import A2C from huggingface_sb3 import load_from_hub checkpoint = load_from_hub( repo_id="IlluminatiPudding/a2c-PandaPickAndPlaceDense-v3-v13.3", filename="a2c-PandaPickAndPlaceDense-v3.zip", # assumed filename ) model = A2C.load(checkpoint) ```
Kiddyz/testllm-c2
Kiddyz
2023-11-21T11:37:43Z
1,477
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-21T10:33:44Z
--- license: apache-2.0 language: - en pipeline_tag: text-generation --- Based on Mistral 7B, just for testing. The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. ## Model Architecture Mistral-7B-v0.1 is a transformer model with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer
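A minimal generation sketch (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Kiddyz/testllm-c2")
print(generator("The quick brown fox", max_new_tokens=32))
```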
HerczogC/final-nlp-uba-falcon-7b-instruct
HerczogC
2023-11-21T11:30:53Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:adapter:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2023-11-20T18:06:18Z
--- library_name: peft base_model: ybelkada/falcon-7b-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.6.3.dev0
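## How to get started (sketch)

Since the "How to Get Started with the Model" section above is still `[More Information Needed]`, here is a minimal, hedged loading sketch that reconstructs the 4-bit `bitsandbytes` config documented under "Training procedure". The adapter repository id below is a placeholder, not a confirmed name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Rebuild the quantization config listed in "Training procedure".
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "ybelkada/falcon-7b-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon used custom modeling code on older transformers
)
tokenizer = AutoTokenizer.from_pretrained("ybelkada/falcon-7b-sharded-bf16")

# "your-username/your-adapter" is a placeholder for this repository's id.
model = PeftModel.from_pretrained(base_model, "your-username/your-adapter")
```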
IlluminatiPudding/a2c-PandaPickAndPlaceDense-v3_v20
IlluminatiPudding
2023-11-21T11:22:56Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlaceDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T11:17:14Z
---
library_name: stable-baselines3
tags:
- PandaPickAndPlaceDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaPickAndPlaceDense-v3
      type: PandaPickAndPlaceDense-v3
    metrics:
    - type: mean_reward
      value: -50.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaPickAndPlaceDense-v3**

This is a trained model of an **A2C** agent playing **PandaPickAndPlaceDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption:

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed from the standard SB3 hub naming convention.
checkpoint = load_from_hub(
    repo_id="IlluminatiPudding/a2c-PandaPickAndPlaceDense-v3_v20",
    filename="a2c-PandaPickAndPlaceDense-v3.zip",
)
model = A2C.load(checkpoint)
```
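To roll the policy out in the environment, a hedged sketch (assumes `panda_gym` is installed so the environment id is registered):

```python
import gymnasium as gym
import panda_gym  # noqa: F401 - registers the Panda environments

env = gym.make("PandaPickAndPlaceDense-v3")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```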
HamdanXI/t5-small-paradetox-1Token-split-masked
HamdanXI
2023-11-21T11:16:48Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-21T11:06:07Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer model-index: - name: t5-small-paradetox-1Token-split-masked results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-paradetox-1Token-split-masked This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
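Since the card leaves usage undocumented, here is a minimal inference sketch. The expected input format (toxic text in, detoxified text out) is an assumption based on the ParaDetox task named in the repo id, not something the card confirms.

```python
from transformers import pipeline

detox = pipeline(
    "text2text-generation",
    model="HamdanXI/t5-small-paradetox-1Token-split-masked",
)

# Input formatting is assumed; the card does not document preprocessing.
print(detox("this is some toxic sentence", max_new_tokens=32)[0]["generated_text"])
```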
mljn/FRA_party_tweets_climate
mljn
2023-11-21T11:15:31Z
5
0
transformers
[ "transformers", "safetensors", "camembert", "text-classification", "generated_from_trainer", "base_model:almanach/camembert-base", "base_model:finetune:almanach/camembert-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-21T11:15:00Z
---
license: mit
base_model: camembert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: FRA_party_tweets_climate
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# FRA_party_tweets_climate

This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0826
- Accuracy: 0.9857
- F1 Macro: 0.9853
- Accuracy Balanced: 0.9847
- F1 Micro: 0.9857
- Precision Macro: 0.9858
- Recall Macro: 0.9847
- Precision Micro: 0.9857
- Recall Micro: 0.9857

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 80
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Accuracy Balanced | F1 Micro | Precision Macro | Recall Macro | Precision Micro | Recall Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------------:|:--------:|:---------------:|:------------:|:---------------:|:------------:|
| 0.2428 | 1.0 | 628 | 0.0792 | 0.9841 | 0.9836 | 0.9831 | 0.9841 | 0.9842 | 0.9831 | 0.9841 | 0.9841 |
| 0.058 | 2.0 | 1256 | 0.0925 | 0.9809 | 0.9804 | 0.9804 | 0.9809 | 0.9804 | 0.9804 | 0.9809 | 0.9809 |
| 0.0429 | 3.0 | 1884 | 0.0785 | 0.9857 | 0.9852 | 0.9846 | 0.9857 | 0.9859 | 0.9846 | 0.9857 | 0.9857 |
| 0.024 | 4.0 | 2512 | 0.0829 | 0.9857 | 0.9853 | 0.9847 | 0.9857 | 0.9858 | 0.9847 | 0.9857 | 0.9857 |
| 0.0202 | 5.0 | 3140 | 0.0826 | 0.9857 | 0.9853 | 0.9847 | 0.9857 | 0.9858 | 0.9847 | 0.9857 | 0.9857 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
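A minimal inference sketch for this classifier; the label names are not documented on the card, so inspect the model config rather than assuming them.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="mljn/FRA_party_tweets_climate")

# Label semantics are undocumented; check the config before relying on them.
print(clf.model.config.id2label)
print(clf("Exemple de tweet sur la politique climatique."))
```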
TheBloke/Qwen-14B-Chat-AWQ
TheBloke
2023-11-21T11:10:51Z
44
11
transformers
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "zh", "en", "arxiv:2309.16609", "arxiv:2305.08322", "arxiv:2009.03300", "arxiv:2305.05280", "arxiv:2210.03629", "base_model:Qwen/Qwen-14B-Chat", "base_model:quantized:Qwen/Qwen-14B-Chat", "autotrain_compatible", "4-bit", "awq", "region:us" ]
text-generation
2023-11-21T09:57:40Z
--- base_model: Qwen/Qwen-14B-Chat inference: false language: - zh - en model_creator: Qwen model_name: Qwen 14B Chat model_type: qwen pipeline_tag: text-generation prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke tags: - qwen --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Qwen 14B Chat - AWQ - Model creator: [Qwen](https://huggingface.co/Qwen) - Original model: [Qwen 14B Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) <!-- description start --> ## Description This repo contains AWQ model files for [Qwen's Qwen 14B Chat](https://huggingface.co/Qwen/Qwen-14B-Chat). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Qwen-14B-Chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Qwen-14B-Chat-GPTQ) * [Qwen's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Qwen/Qwen-14B-Chat) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. 
The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Qwen-14B-Chat-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 9.67 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Qwen-14B-Chat-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Qwen-14B-Chat-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click **Load**; once loading finishes, the model is ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!

<!-- README_AWQ.md-text-generation-webui end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Qwen-14B-Chat-AWQ --quantization awq --dtype auto
```

- When using vLLM from Python code, again set `quantization=awq`. For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

system_message = "You are a helpful assistant."  # assumed default system prompt

# Plain string (not an f-string) so .format() can fill both placeholders below.
prompt_template = '''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

prompts = [prompt_template.format(system_message=system_message, prompt=p) for p in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# trust_remote_code is likely required for Qwen's custom tokenizer code.
llm = LLM(model="TheBloke/Qwen-14B-Chat-AWQ", quantization="awq", dtype="auto", trust_remote_code=True)

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later.
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Qwen-14B-Chat-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # assumed default system prompt
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

client = InferenceClient(endpoint_url)
# Send the fully formatted ChatML template, not the bare prompt.
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print("Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/Qwen-14B-Chat-AWQ"

# trust_remote_code is likely required: Qwen ships custom model/tokenizer code.
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0",
    trust_remote_code=True
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # assumed default system prompt
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Qwen's Qwen 14B Chat # Qwen-14B-Chat <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/> <p> <br> <p align="center"> 🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a>&nbsp&nbsp | &nbsp&nbsp🖥️ <a href="https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary">Demo</a> <br> <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (微信)</a>&nbsp&nbsp | &nbsp&nbsp DingTalk (钉钉) &nbsp&nbsp | &nbsp&nbsp<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>&nbsp&nbsp </p> <br><br> ## 介绍(Introduction) **通义千问-14B(Qwen-14B)**是阿里云研发的通义千问大模型系列的140亿参数规模的模型。Qwen-14B是基于Transformer的大语言模型, 在超大规模的预训练数据上进行训练得到。预训练数据类型多样,覆盖广泛,包括大量网络文本、专业书籍、代码等。同时,在Qwen-14B的基础上,我们使用对齐机制打造了基于大语言模型的AI助手Qwen-14B-Chat。本仓库为Qwen-14B-Chat的仓库。 如果您想了解更多关于通义千问-14B开源模型的细节,我们建议您参阅[GitHub代码库](https://github.com/QwenLM/Qwen)。 **Qwen-14B** is the 14B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-14B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-14B, we release Qwen-14B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. This repository is the one for Qwen-14B-Chat. For more details about the open-source model of Qwen-14B, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository. 
<br> ## 要求(Requirements) * python 3.8及以上版本 * pytorch 1.12及以上版本,推荐2.0及以上版本 * 建议使用CUDA 11.4及以上(GPU用户、flash-attention用户等需考虑此选项) * python 3.8 and above * pytorch 1.12 and above, 2.0 and above are recommended * CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.) <br> ## 依赖项(Dependency) 运行Qwen-14B-Chat,请确保满足上述要求,再执行以下pip命令安装依赖库 To run Qwen-14B-Chat, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries. ```bash pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed ``` 另外,推荐安装`flash-attention`库(**当前已支持flash attention 2**),以实现更高的效率和更低的显存占用。 In addition, it is recommended to install the `flash-attention` library (**we support flash attention 2 now.**) for higher efficiency and lower memory usage. ```bash git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # 下方安装可选,安装可能比较缓慢。 # pip install csrc/layer_norm # pip install csrc/rotary ``` <br> ## 快速使用(Quickstart) 下面我们展示了一个使用Qwen-14B-Chat模型,进行多轮对话交互的样例: We show an example of multi-turn interaction with Qwen-14B-Chat in the following code: ```python from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation import GenerationConfig # Note: The default behavior now has injection attack prevention off. tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True) # use bf16 # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval() # use fp16 # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval() # use cpu only # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", device_map="cpu", trust_remote_code=True).eval() # use auto mode, automatically select precision based on the device. model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", device_map="auto", trust_remote_code=True).eval() # Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this. # model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参 # 第一轮对话 1st dialogue turn response, history = model.chat(tokenizer, "你好", history=None) print(response) # 你好!很高兴为你提供帮助。 # 第二轮对话 2nd dialogue turn response, history = model.chat(tokenizer, "给我讲一个年轻人奋斗创业最终取得成功的故事。", history=history) print(response) # 这是一个关于一个年轻人奋斗创业最终取得成功的故事。 # 故事的主人公叫李明,他来自一个普通的家庭,父母都是普通的工人。从小,李明就立下了一个目标:要成为一名成功的企业家。 # 为了实现这个目标,李明勤奋学习,考上了大学。在大学期间,他积极参加各种创业比赛,获得了不少奖项。他还利用课余时间去实习,积累了宝贵的经验。 # 毕业后,李明决定开始自己的创业之路。他开始寻找投资机会,但多次都被拒绝了。然而,他并没有放弃。他继续努力,不断改进自己的创业计划,并寻找新的投资机会。 # 最终,李明成功地获得了一笔投资,开始了自己的创业之路。他成立了一家科技公司,专注于开发新型软件。在他的领导下,公司迅速发展起来,成为了一家成功的科技企业。 # 李明的成功并不是偶然的。他勤奋、坚韧、勇于冒险,不断学习和改进自己。他的成功也证明了,只要努力奋斗,任何人都有可能取得成功。 # 第三轮对话 3rd dialogue turn response, history = model.chat(tokenizer, "给这个故事起一个标题", history=history) print(response) # 《奋斗创业:一个年轻人的成功之路》 ``` 关于更多的使用说明,请参考我们的[GitHub repo](https://github.com/QwenLM/Qwen)获取更多信息。 For more information, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen) for more information. 
<br> ## 量化 (Quantization) ### 用法 (Usage) **请注意:我们更新量化方案为基于[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)的量化,提供Qwen-14B-Chat的Int4量化模型[点击这里](https://huggingface.co/Qwen/Qwen-14B-Chat-Int4)。相比此前方案,该方案在模型评测效果几乎无损,且存储需求更低,推理速度更优。** **Note: we provide a new solution based on [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), and release an Int4 quantized model for Qwen-14B-Chat [Click here](https://huggingface.co/Qwen/Qwen-14B-Chat-Int4), which achieves nearly lossless model effects but improved performance on both memory costs and inference speed, in comparison with the previous solution.** 以下我们提供示例说明如何使用Int4量化模型。在开始使用前,请先保证满足要求(如torch 2.0及以上,transformers版本为4.32.0及以上,等等),并安装所需安装包: Here we demonstrate how to use our provided quantized models for inference. Before you start, make sure you meet the requirements of auto-gptq (e.g., torch 2.0 and above, transformers 4.32.0 and above, etc.) and install the required packages: ```bash pip install auto-gptq optimum ``` 如安装`auto-gptq`遇到问题,我们建议您到官方[repo](https://github.com/PanQiWei/AutoGPTQ)搜索合适的预编译wheel。 随后即可使用和上述一致的用法调用量化模型: If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a pre-build wheel. Then you can load the quantized model easily and run inference as same as usual: ```python model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen-14B-Chat-Int4", device_map="auto", trust_remote_code=True ).eval() response, history = model.chat(tokenizer, "你好", history=None) ``` ### 效果评测 我们对BF16,Int8和Int4模型在基准评测上做了测试(使用zero-shot设置),发现量化模型效果损失较小,结果如下所示: We illustrate the zero-shot performance of both BF16, Int8 and Int4 models on the benchmark, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below: | Quantization | MMLU | CEval (val) | GSM8K | Humaneval | |--------------|:----:|:-----------:|:-----:|:---------:| | BF16 | 64.6 | 69.8 | 60.1 | 43.9 | | Int8 | 63.6 | 68.6 | 60.0 | 48.2 | | Int4 | 63.3 | 69.0 | 59.8 | 45.7 | ### 推理速度 (Inference Speed) 我们测算了不同精度模型以及不同FlashAttn库版本下模型生成2048和8192个token的平均推理速度。如图所示: We measured the average inference speed of generating 2048 and 8192 tokens with different quantization levels and versions of flash-attention, respectively. | Quantization | FlashAttn | Speed (2048 tokens) | Speed (8192 tokens) | | ------------- | :-------: | :------------------:| :------------------:| | BF16 | v2 | 32.88 | 24.87 | | Int8 | v2 | 29.28 | 24.22 | | Int4 | v2 | 38.72 | 27.33 | | BF16 | v1 | 32.76 | 28.89 | | Int8 | v1 | 28.31 | 23.87 | | Int4 | v1 | 37.81 | 26.46 | | BF16 | Disabled | 29.32 | 22.91 | | Int8 | Disabled | 31.12 | 24.60 | | Int4 | Disabled | 37.65 | 26.00 | 具体而言,我们记录在长度为1的上下文的条件下生成8192个token的性能。评测运行于单张A100-SXM4-80G GPU,使用PyTorch 2.0.1和CUDA 11.8。推理速度是生成8192个token的速度均值。 In detail, the setting of profiling is generating 8192 new tokens with 1 context token. The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.8. The inference speed is averaged over the generated 8192 tokens. 注意:以上Int4/Int8模型生成速度使用autogptq库给出,当前``AutoModelForCausalLM.from_pretrained``载入的模型生成速度会慢大约20%。我们已经将该问题汇报给HuggingFace团队,若有解决方案将即时更新。 Note: The generation speed of the Int4/Int8 models mentioned above is provided by the autogptq library. The current speed of the model loaded using "AutoModelForCausalLM.from_pretrained" will be approximately 20% slower. We have reported this issue to the HuggingFace team and will update it promptly if a solution is available. 
### 显存使用 (GPU Memory Usage) 我们还测算了不同模型精度编码2048个token及生成8192个token的峰值显存占用情况。(显存消耗在是否使用FlashAttn的情况下均类似。)结果如下所示: We also profile the peak GPU memory usage for encoding 2048 tokens as context (and generating single token) and generating 8192 tokens (with single token as context) under different quantization levels, respectively. (The GPU memory usage is similar when using flash-attention or not.)The results are shown below. | Quantization Level | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens | | ------------------ | :---------------------------------: | :-----------------------------------: | | BF16 | 30.15GB | 38.94GB | | Int8 | 18.81GB | 27.54GB | | Int4 | 13.01GB | 21.79GB | 上述性能测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py)完成。 The above speed and memory profiling are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py). <br> ## 模型细节(Model) 与Qwen-14B预训练模型相同,Qwen-14B-Chat模型规模基本情况如下所示 The details of the model architecture of Qwen-14B-Chat are listed as follows | Hyperparameter | Value | |:----------------|:------:| | n_layers | 40 | | n_heads | 40 | | d_model | 5120 | | vocab size | 151851 | | sequence length | 2048 | 在位置编码、FFN激活函数和normalization的实现方式上,我们也采用了目前最流行的做法, 即RoPE相对位置编码、SwiGLU激活函数、RMSNorm(可选安装flash-attention加速)。 在分词器方面,相比目前主流开源模型以中英词表为主,Qwen-14B-Chat使用了约15万token大小的词表。 该词表在GPT-4使用的BPE词表`cl100k_base`基础上,对中文、多语言进行了优化,在对中、英、代码数据的高效编解码的基础上,对部分多语言更加友好,方便用户在不扩展词表的情况下对部分语种进行能力增强。 词表对数字按单个数字位切分。调用较为高效的[tiktoken分词库](https://github.com/openai/tiktoken)进行分词。 For position encoding, FFN activation function, and normalization calculation methods, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU for activation function, and RMSNorm for normalization (optional installation of flash-attention for acceleration). For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-14B-Chat uses a vocabulary of over 150K tokens. It first considers efficient encoding of Chinese, English, and code data, and is also more friendly to multilingual languages, enabling users to directly enhance the capability of some languages without expanding the vocabulary. It segments numbers by single digit, and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization. <br> ## 评测效果(Evaluation) 对于Qwen-14B-Chat模型,我们同样评测了常规的中文理解(C-Eval)、英文理解(MMLU)、代码(HumanEval)和数学(GSM8K)等权威任务,同时包含了长序列任务的评测结果。由于Qwen-14B-Chat模型经过对齐后,激发了较强的外部系统调用能力,我们还进行了工具使用能力方面的评测。 提示:由于硬件和框架造成的舍入误差,复现结果如有波动属于正常现象。 For Qwen-14B-Chat, we also evaluate the model on C-Eval, MMLU, HumanEval, GSM8K, etc., as well as the benchmark evaluation for long-context understanding, and tool usage. Note: Due to rounding errors caused by hardware and framework, differences in reproduced results are possible. ### 中文评测(Chinese Evaluation) #### C-Eval 在[C-Eval](https://arxiv.org/abs/2305.08322)验证集上,我们评价了Qwen-14B-Chat模型的0-shot & 5-shot准确率 We demonstrate the 0-shot & 5-shot accuracy of Qwen-14B-Chat on C-Eval validation set | Model | Avg. Acc. 
| |:--------------------------------:|:---------:| | LLaMA2-7B-Chat | 31.9 | | LLaMA2-13B-Chat | 36.2 | | LLaMA2-70B-Chat | 44.3 | | ChatGLM2-6B-Chat | 52.6 | | InternLM-7B-Chat | 53.6 | | Baichuan2-7B-Chat | 55.6 | | Baichuan2-13B-Chat | 56.7 | | Qwen-7B-Chat (original) (0-shot) | 54.2 | | **Qwen-7B-Chat (0-shot)** | 59.7 | | **Qwen-7B-Chat (5-shot)** | 59.3 | | **Qwen-14B-Chat (0-shot)** | 69.8 | | **Qwen-14B-Chat (5-shot)** | **71.7** | C-Eval测试集上,Qwen-14B-Chat模型的zero-shot准确率结果如下: The zero-shot accuracy of Qwen-14B-Chat on C-Eval testing set is provided below: | Model | Avg. | STEM | Social Sciences | Humanities | Others | | :---------------------- | :------: | :--: | :-------------: | :--------: | :----: | | Chinese-Alpaca-Plus-13B | 41.5 | 36.6 | 49.7 | 43.1 | 41.2 | | Chinese-Alpaca-2-7B | 40.3 | - | - | - | - | | ChatGLM2-6B-Chat | 50.1 | 46.4 | 60.4 | 50.6 | 46.9 | | Baichuan-13B-Chat | 51.5 | 43.7 | 64.6 | 56.2 | 49.2 | | Qwen-7B-Chat (original) | 54.6 | 47.8 | 67.6 | 59.3 | 50.6 | | **Qwen-7B-Chat** | 58.6 | 53.3 | 72.1 | 62.8 | 52.0 | | **Qwen-14B-Chat** | **69.1** | 65.1 | 80.9 | 71.2 | 63.4 | 在14B规模模型上,经过人类指令对齐的Qwen-14B-Chat模型,准确率在同类相近规模模型中仍然处于前列。 Compared with other pretrained models with comparable model size, the human-aligned Qwen-14B-Chat performs well in C-Eval accuracy. ### 英文评测(English Evaluation) #### MMLU [MMLU](https://arxiv.org/abs/2009.03300)评测集上,Qwen-14B-Chat模型的 0-shot & 5-shot 准确率如下,效果同样在同类对齐模型中同样表现较优。 The 0-shot & 5-shot accuracy of Qwen-14B-Chat on MMLU is provided below. The performance of Qwen-14B-Chat still on the top between other human-aligned models with comparable size. | Model | Avg. Acc. | |:--------------------------------:|:---------:| | ChatGLM2-6B-Chat | 46.0 | | LLaMA2-7B-Chat | 46.2 | | InternLM-7B-Chat | 51.1 | | Baichuan2-7B-Chat | 52.9 | | LLaMA2-13B-Chat | 54.6 | | Baichuan2-13B-Chat | 57.3 | | LLaMA2-70B-Chat | 63.8 | | Qwen-7B-Chat (original) (0-shot) | 53.9 | | **Qwen-7B-Chat (0-shot)** | 55.8 | | **Qwen-7B-Chat (5-shot)** | 57.0 | | **Qwen-14B-Chat (0-shot)** | 64.6 | | **Qwen-14B-Chat (5-shot)** | **66.5** | ### 代码评测(Coding Evaluation) Qwen-14B-Chat在[HumanEval](https://github.com/openai/human-eval)的zero-shot Pass@1效果如下 The zero-shot Pass@1 of Qwen-14B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below | Model | Pass@1 | |:-----------------------:|:--------:| | ChatGLM2-6B-Chat | 11.0 | | LLaMA2-7B-Chat | 12.2 | | InternLM-7B-Chat | 14.6 | | Baichuan2-7B-Chat | 13.4 | | LLaMA2-13B-Chat | 18.9 | | Baichuan2-13B-Chat | 17.7 | | LLaMA2-70B-Chat | 32.3 | | Qwen-7B-Chat (original) | 24.4 | | **Qwen-7B-Chat** | 37.2 | | **Qwen-14B-Chat** | **43.9** | ### 数学评测(Mathematics Evaluation) 在评测数学能力的[GSM8K](https://github.com/openai/grade-school-math)上,Qwen-14B-Chat的准确率结果如下 The accuracy of Qwen-14B-Chat on GSM8K is shown below | Model | Acc. 
|
|:--------------------------------:|:--------:|
| LLaMA2-7B-Chat | 26.3 |
| ChatGLM2-6B-Chat | 28.8 |
| Baichuan2-7B-Chat | 32.8 |
| InternLM-7B-Chat | 33.0 |
| LLaMA2-13B-Chat | 37.1 |
| Baichuan2-13B-Chat | 55.3 |
| LLaMA2-70B-Chat | 59.3 |
| Qwen-7B-Chat (original) (0-shot) | 41.1 |
| **Qwen-7B-Chat (0-shot)** | 50.3 |
| **Qwen-7B-Chat (8-shot)** | 54.1 |
| **Qwen-14B-Chat (0-shot)** | **60.1** |
| **Qwen-14B-Chat (8-shot)** | 59.3 |

### 长序列评测(Long-Context Understanding)

通过NTK插值,LogN注意力缩放可以扩展Qwen-14B-Chat的上下文长度。在长文本摘要数据集[VCSUM](https://arxiv.org/abs/2305.05280)上(文本平均长度在15K左右),Qwen-14B-Chat的Rouge-L结果如下:

**(若要启用这些技巧,请将config.json里的`use_dynamic_ntk`和`use_logn_attn`设置为true)**

We introduce NTK-aware interpolation, LogN attention scaling to extend the context length of Qwen-14B-Chat. The Rouge-L results of Qwen-14B-Chat on long-text summarization dataset [VCSUM](https://arxiv.org/abs/2305.05280) (The average length of this dataset is around 15K) are shown below:

**(To use these tricks, please set `use_dynamic_ntk` and `use_logn_attn` to true in config.json.)**

| Model | VCSUM (zh) |
|:------------------|:----------:|
| GPT-3.5-Turbo-16k | 16.0 |
| LLama2-7B-Chat | 0.2 |
| InternLM-7B-Chat | 13.0 |
| ChatGLM2-6B-Chat | 16.3 |
| **Qwen-14B-Chat** | **17.3** |

### 工具使用能力的评测(Tool Usage)

#### ReAct Prompting

千问支持通过 [ReAct Prompting](https://arxiv.org/abs/2210.03629) 调用插件/工具/API。ReAct 也是 [LangChain](https://python.langchain.com/) 框架采用的主要方式之一。在我们开源的、用于评估工具使用能力的评测基准上,千问的表现如下:

Qwen-Chat supports calling plugins/tools/APIs through [ReAct Prompting](https://arxiv.org/abs/2210.03629). ReAct is also one of the main approaches used by the [LangChain](https://python.langchain.com/) framework. In our evaluation benchmark for assessing tool usage capabilities, Qwen-Chat's performance is as follows:

<table>
<tr> <th colspan="4" align="center">Chinese Tool-Use Benchmark</th> </tr>
<tr> <th align="center">Model</th><th align="center">Tool Selection (Acc.↑)</th><th align="center">Tool Input (Rouge-L↑)</th><th align="center">False Positive Error↓</th> </tr>
<tr> <td>GPT-4</td><td align="center">95%</td><td align="center">0.90</td><td align="center">15.0%</td> </tr>
<tr> <td>GPT-3.5</td><td align="center">85%</td><td align="center">0.88</td><td align="center">75.0%</td> </tr>
<tr> <td>Qwen-7B-Chat</td><td align="center">98%</td><td align="center">0.91</td><td align="center">7.3%</td> </tr>
<tr> <td>Qwen-14B-Chat</td><td align="center">98%</td><td align="center">0.93</td><td align="center">2.4%</td> </tr>
</table>

> 评测基准中出现的插件均没有出现在千问的训练集中。该基准评估了模型在多个候选插件中选择正确插件的准确率、传入插件的参数的合理性、以及假阳率。假阳率(False Positive)定义:在处理不该调用插件的请求时,错误地调用了插件。

> The plugins that appear in the evaluation set do not appear in the training set of Qwen. This benchmark evaluates the accuracy of the model in selecting the correct plugin from multiple candidate plugins, the rationality of the parameters passed into the plugin, and the false positive rate. False Positive: Incorrectly invoking a plugin when it should not have been called when responding to a query.
![](assets/react_showcase_001.png) ![](assets/react_showcase_002.png) #### Code Interpreter 为了考察Qwen使用Python Code Interpreter完成数学解题、数据可视化、及文件处理与爬虫等任务的能力,我们专门建设并开源了一个评测这方面能力的[评测基准](https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark)。 我们发现Qwen在生成代码的可执行率、结果正确性上均表现较好: To assess Qwen's ability to use the Python Code Interpreter for tasks such as mathematical problem solving, data visualization, and other general-purpose tasks such as file handling and web scraping, we have created and open-sourced a benchmark specifically designed for evaluating these capabilities. You can find the benchmark at this [link](https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark). We have observed that Qwen performs well in terms of code executability and result accuracy when generating code: <table> <tr> <th colspan="4" align="center">Executable Rate of Generated Code (%)</th> </tr> <tr> <th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization↑</th><th align="center">General↑</th> </tr> <tr> <td>GPT-4</td><td align="center">91.9</td><td align="center">85.9</td><td align="center">82.8</td> </tr> <tr> <td>GPT-3.5</td><td align="center">89.2</td><td align="center">65.0</td><td align="center">74.1</td> </tr> <tr> <td>LLaMA2-7B-Chat</td> <td align="center">41.9</td> <td align="center">33.1</td> <td align="center">24.1 </td> </tr> <tr> <td>LLaMA2-13B-Chat</td> <td align="center">50.0</td> <td align="center">40.5</td> <td align="center">48.3 </td> </tr> <tr> <td>CodeLLaMA-7B-Instruct</td> <td align="center">85.1</td> <td align="center">54.0</td> <td align="center">70.7 </td> </tr> <tr> <td>CodeLLaMA-13B-Instruct</td> <td align="center">93.2</td> <td align="center">55.8</td> <td align="center">74.1 </td> </tr> <tr> <td>InternLM-7B-Chat-v1.1</td> <td align="center">78.4</td> <td align="center">44.2</td> <td align="center">62.1 </td> </tr> <tr> <td>InternLM-20B-Chat</td> <td align="center">70.3</td> <td align="center">44.2</td> <td align="center">65.5 </td> </tr> <tr> <td>Qwen-7B-Chat</td> <td align="center">82.4</td> <td align="center">64.4</td> <td align="center">67.2 </td> </tr> <tr> <td>Qwen-14B-Chat</td> <td align="center">89.2</td> <td align="center">84.1</td> <td align="center">65.5</td> </tr> </table> <table> <tr> <th colspan="4" align="center">Accuracy of Code Execution Results (%)</th> </tr> <tr> <th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization-Hard↑</th><th align="center">Visualization-Easy↑</th> </tr> <tr> <td>GPT-4</td><td align="center">82.8</td><td align="center">66.7</td><td align="center">60.8</td> </tr> <tr> <td>GPT-3.5</td><td align="center">47.3</td><td align="center">33.3</td><td align="center">55.7</td> </tr> <tr> <td>LLaMA2-7B-Chat</td> <td align="center">3.9</td> <td align="center">14.3</td> <td align="center">39.2 </td> </tr> <tr> <td>LLaMA2-13B-Chat</td> <td align="center">8.3</td> <td align="center">8.3</td> <td align="center">40.5 </td> </tr> <tr> <td>CodeLLaMA-7B-Instruct</td> <td align="center">14.3</td> <td align="center">26.2</td> <td align="center">60.8 </td> </tr> <tr> <td>CodeLLaMA-13B-Instruct</td> <td align="center">28.2</td> <td align="center">27.4</td> <td align="center">62.0 </td> </tr> <tr> <td>InternLM-7B-Chat-v1.1</td> <td align="center">28.5</td> <td align="center">4.8</td> <td align="center">40.5 </td> </tr> <tr> <td>InternLM-20B-Chat</td> <td align="center">34.6</td> <td align="center">21.4</td> <td align="center">45.6 </td> </tr> <tr> <td>Qwen-7B-Chat</td> <td align="center">41.9</td> <td 
align="center">40.5</td> <td align="center">54.4 </td> </tr> <tr> <td>Qwen-14B-Chat</td> <td align="center">58.4</td> <td align="center">53.6</td> <td align="center">59.5</td> </tr> </table> <p align="center"> <br> <img src="assets/code_interpreter_showcase_001.jpg" /> <br> <p> #### Huggingface Agent 千问还具备作为 [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents) 的能力。它在 Huggingface 提供的run模式评测基准上的表现如下: Qwen-Chat also has the capability to be used as a [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents). Its performance on the run-mode benchmark provided by HuggingFace is as follows: <table> <tr> <th colspan="4" align="center">HuggingFace Agent Benchmark- Run Mode</th> </tr> <tr> <th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th> </tr> <tr> <td>GPT-4</td><td align="center">100</td><td align="center">100</td><td align="center">97.4</td> </tr> <tr> <td>GPT-3.5</td><td align="center">95.4</td><td align="center">96.3</td><td align="center">87.0</td> </tr> <tr> <td>StarCoder-Base-15B</td><td align="center">86.1</td><td align="center">87.0</td><td align="center">68.9</td> </tr> <tr> <td>StarCoder-15B</td><td align="center">87.0</td><td align="center">88.0</td><td align="center">68.9</td> </tr> <tr> <td>Qwen-7B-Chat</td><td align="center">87.0</td><td align="center">87.0</td><td align="center">71.5</td> </tr> <tr> <td>Qwen-14B-Chat</td><td align="center">93.5</td><td align="center">94.4</td><td align="center">87.0</td> </tr> </table> <table> <tr> <th colspan="4" align="center">HuggingFace Agent Benchmark - Chat Mode</th> </tr> <tr> <th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th> </tr> <tr> <td>GPT-4</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">98.5</td> </tr> <tr> <td>GPT-3.5</td><td align="center">97.3</td><td align="center">96.8</td><td align="center">89.6</td> </tr> <tr> <td>StarCoder-Base-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">91.1</td> </tr> <tr> <td>StarCoder-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">89.6</td> </tr> <tr> <td>Qwen-7B-Chat</td><td align="center">94.7</td><td align="center">94.7</td><td align="center">85.1</td> </tr> <tr> <td>Qwen-14B-Chat</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">95.5</td> </tr> </table> <br> ## FAQ 如遇到问题,敬请查阅[FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。 If you meet problems, please refer to [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and the issues first to search a solution before you launch a new issue. <br> ## 引用 (Citation) 如果你觉得我们的工作对你有帮助,欢迎引用! If you find our work helpful, feel free to give us a cite. 
``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ``` <br> ## 使用协议(License Agreement) 我们的代码和模型权重对学术研究完全开放,并支持商用。请查看[LICENSE](https://github.com/QwenLM/Qwen/blob/main/LICENSE)了解具体的开源协议细节。如需商用,欢迎填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。 Our code and checkpoints are open to research purpose, and they are allowed for commercial purposes. Check [LICENSE](https://github.com/QwenLM/Qwen/blob/main/LICENSE) for more details about the license. If you have requirements for commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply. <br> ## 联系我们(Contact Us) 如果你想给我们的研发团队和产品团队留言,欢迎加入我们的微信群、钉钉群以及Discord!同时,也欢迎通过邮件([email protected])联系我们。 If you are interested to leave a message to either our research team or product team, join our Discord or WeChat groups! Also, feel free to send an email to [email protected].
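The AWQ card above lists [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) as a supported loader but only shows Transformers-based code; here is a minimal AutoAWQ sketch. Whether layer fusion supports the qwen architecture is not confirmed, so it is left disabled.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "TheBloke/Qwen-14B-Chat-AWQ"

# trust_remote_code is needed because Qwen ships custom modeling/tokenizer code.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoAWQForCausalLM.from_quantized(
    model_path,
    fuse_layers=False,  # fusion support for qwen is an assumption, so keep it off
    trust_remote_code=True,
)

tokens = tokenizer("Tell me about AI", return_tensors="pt").input_ids.cuda()
print(tokenizer.decode(model.generate(tokens, max_new_tokens=64)[0]))
```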
TheBloke/Qwen-7B-Chat-AWQ
TheBloke
2023-11-21T11:10:39Z
18
8
transformers
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "zh", "en", "arxiv:2309.16609", "arxiv:2305.08322", "arxiv:2009.03300", "arxiv:2305.05280", "arxiv:2210.03629", "base_model:Qwen/Qwen-7B-Chat", "base_model:quantized:Qwen/Qwen-7B-Chat", "autotrain_compatible", "4-bit", "awq", "region:us" ]
text-generation
2023-11-21T09:54:47Z
--- base_model: Qwen/Qwen-7B-Chat inference: false language: - zh - en model_creator: Qwen model_name: Qwen 7B Chat model_type: qwen pipeline_tag: text-generation prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke tags: - qwen --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Qwen 7B Chat - AWQ - Model creator: [Qwen](https://huggingface.co/Qwen) - Original model: [Qwen 7B Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) <!-- description start --> ## Description This repo contains AWQ model files for [Qwen's Qwen 7B Chat](https://huggingface.co/Qwen/Qwen-7B-Chat). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Qwen-7B-Chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Qwen-7B-Chat-GPTQ) * [Qwen's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Qwen/Qwen-7B-Chat) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. 
The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Qwen-7B-Chat-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 8192 | 5.86 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Qwen-7B-Chat-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Qwen-7B-Chat-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click **Load**; once loading finishes, the model is ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!

<!-- README_AWQ.md-text-generation-webui end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Qwen-7B-Chat-AWQ --quantization awq --dtype auto
```

- When using vLLM from Python code, again set `quantization=awq`. For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

system_message = "You are a helpful assistant."  # assumed default system prompt

# Plain string (not an f-string) so .format() can fill both placeholders below.
prompt_template = '''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

prompts = [prompt_template.format(system_message=system_message, prompt=p) for p in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# trust_remote_code is likely required for Qwen's custom tokenizer code.
llm = LLM(model="TheBloke/Qwen-7B-Chat-AWQ", quantization="awq", dtype="auto", trust_remote_code=True)

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later.
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Qwen-7B-Chat-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # assumed default system prompt
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

client = InferenceClient(endpoint_url)
# Send the fully formatted ChatML template, not the bare prompt.
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print("Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/Qwen-7B-Chat-AWQ"

# trust_remote_code is likely required: Qwen ships custom model/tokenizer code.
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0",
    trust_remote_code=True
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # assumed default system prompt
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Qwen's Qwen 7B Chat

# Qwen-7B-Chat

<p align="center">
    <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/>
</p>
<br>

<p align="center">
 🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a>&nbsp&nbsp | &nbsp&nbsp🖥️ <a href="https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary">Demo</a>
<br>
<a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (微信)</a>&nbsp&nbsp | &nbsp&nbsp DingTalk (钉钉) &nbsp&nbsp | &nbsp&nbsp<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>&nbsp&nbsp
</p>
<br><br>

## 介绍(Introduction)

**通义千问-7B(Qwen-7B)**是阿里云研发的通义千问大模型系列的70亿参数规模的模型。Qwen-7B是基于Transformer的大语言模型, 在超大规模的预训练数据上进行训练得到。预训练数据类型多样,覆盖广泛,包括大量网络文本、专业书籍、代码等。同时,在Qwen-7B的基础上,我们使用对齐机制打造了基于大语言模型的AI助手Qwen-7B-Chat。相较于最初开源的Qwen-7B模型,我们现已将预训练模型和Chat模型更新到效果更优的版本。本仓库为Qwen-7B-Chat的仓库。

如果您想了解更多关于通义千问-7B开源模型的细节,我们建议您参阅[GitHub代码库](https://github.com/QwenLM/Qwen)。

**Qwen-7B** is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. Now we have updated both our pretrained and chat models with better performances. This repository is the one for Qwen-7B-Chat.

For more details about Qwen, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.
<br>

## 要求(Requirements)

* python 3.8及以上版本
* pytorch 1.12及以上版本,推荐2.0及以上版本
* 建议使用CUDA 11.4及以上(GPU用户、flash-attention用户等需考虑此选项)
* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above are recommended
* CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.)
<br>

## 依赖项(Dependency)

运行Qwen-7B-Chat,请确保满足上述要求,再执行以下pip命令安装依赖库

To run Qwen-7B-Chat, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries.

```bash
pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed
```

另外,推荐安装`flash-attention`库(**当前已支持flash attention 2**),以实现更高的效率和更低的显存占用。

In addition, it is recommended to install the `flash-attention` library (**we support flash attention 2 now.**) for higher efficiency and lower memory usage.

```bash
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# 下方安装可选,安装可能比较缓慢。
# pip install csrc/layer_norm
# pip install csrc/rotary
```
<br>

## 快速使用(Quickstart)

下面我们展示了一个使用Qwen-7B-Chat模型,进行多轮对话交互的样例:

We show an example of multi-turn interaction with Qwen-7B-Chat in the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)

# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="cpu", trust_remote_code=True).eval()
# use auto mode, automatically select precision based on the device.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True).eval()

# Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this.
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参

# 第一轮对话 1st dialogue turn
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
# 你好!很高兴为你提供帮助。

# 第二轮对话 2nd dialogue turn
response, history = model.chat(tokenizer, "给我讲一个年轻人奋斗创业最终取得成功的故事。", history=history)
print(response)
# 这是一个关于一个年轻人奋斗创业最终取得成功的故事。
# 故事的主人公叫李明,他来自一个普通的家庭,父母都是普通的工人。从小,李明就立下了一个目标:要成为一名成功的企业家。
# 为了实现这个目标,李明勤奋学习,考上了大学。在大学期间,他积极参加各种创业比赛,获得了不少奖项。他还利用课余时间去实习,积累了宝贵的经验。
# 毕业后,李明决定开始自己的创业之路。他开始寻找投资机会,但多次都被拒绝了。然而,他并没有放弃。他继续努力,不断改进自己的创业计划,并寻找新的投资机会。
# 最终,李明成功地获得了一笔投资,开始了自己的创业之路。他成立了一家科技公司,专注于开发新型软件。在他的领导下,公司迅速发展起来,成为了一家成功的科技企业。
# 李明的成功并不是偶然的。他勤奋、坚韧、勇于冒险,不断学习和改进自己。他的成功也证明了,只要努力奋斗,任何人都有可能取得成功。

# 第三轮对话 3rd dialogue turn
response, history = model.chat(tokenizer, "给这个故事起一个标题", history=history)
print(response)
# 《奋斗创业:一个年轻人的成功之路》
```

关于更多的使用说明,请参考我们的[GitHub repo](https://github.com/QwenLM/Qwen)获取更多信息。

Please refer to our [GitHub repo](https://github.com/QwenLM/Qwen) for more information.
<br>

## Tokenizer

> 注:作为术语的“tokenization”在中文中尚无共识的概念对应,本文档采用英文表达以利说明。

基于tiktoken的分词器有别于其他分词器,比如sentencepiece分词器。尤其在微调阶段,需要特别注意特殊token的使用。关于tokenizer的更多信息,以及微调时涉及的相关使用,请参阅[文档](https://github.com/QwenLM/Qwen/blob/main/tokenization_note_zh.md)。

Our tokenizer based on tiktoken is different from other tokenizers, e.g., the sentencepiece tokenizer. You need to pay attention to special tokens, especially during finetuning. For more detailed information on the tokenizer and its use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen/blob/main/tokenization_note.md).

<br>

## 量化 (Quantization)

### 用法 (Usage)

**请注意:我们更新量化方案为基于[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)的量化,提供Qwen-7B-Chat的Int4量化模型[点击这里](https://huggingface.co/Qwen/Qwen-7B-Chat-Int4)。相比此前方案,该方案在模型评测效果几乎无损,且存储需求更低,推理速度更优。**

**Note: we provide a new solution based on [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), and release an Int4 quantized model for Qwen-7B-Chat [(click here)](https://huggingface.co/Qwen/Qwen-7B-Chat-Int4), which achieves nearly lossless evaluation results with lower memory costs and faster inference speed, in comparison with the previous solution.**

以下我们提供示例说明如何使用Int4量化模型。在开始使用前,请先保证满足要求(如torch 2.0及以上,transformers版本为4.32.0及以上,等等),并安装所需安装包:

Here we demonstrate how to use our provided quantized models for inference. Before you start, make sure you meet the requirements of auto-gptq (e.g., torch 2.0 and above, transformers 4.32.0 and above, etc.) and install the required packages:

```bash
pip install auto-gptq optimum
```

如安装`auto-gptq`遇到问题,我们建议您到官方[repo](https://github.com/PanQiWei/AutoGPTQ)搜索合适的预编译wheel。

随后即可使用和上述一致的用法调用量化模型:

If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a pre-built wheel.

Then you can load the quantized model easily and run inference just as usual:

```python
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat-Int4",
    device_map="auto",
    trust_remote_code=True
).eval()
response, history = model.chat(tokenizer, "你好", history=None)
```

### 效果评测 (Evaluation)

我们对BF16,Int8和Int4模型在基准评测上做了测试(使用zero-shot设置),发现量化模型效果损失较小,结果如下所示:

We illustrate the zero-shot performance of the BF16, Int8 and Int4 models on the benchmarks, and we find that the quantized models do not suffer from significant performance degradation. Results are shown below:

| Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
| ------------- | :--------: | :----------: | :----: | :--------: |
| BF16 | 55.8 | 59.7 | 50.3 | 37.2 |
| Int8 | 55.4 | 59.4 | 48.3 | 34.8 |
| Int4 | 55.1 | 59.2 | 49.7 | 29.9 |

### 推理速度 (Inference Speed)

我们测算了不同精度模型以及不同FlashAttn库版本下模型生成2048和8192个token的平均推理速度。结果如下所示:

We measured the average inference speed (tokens/s) of generating 2048 and 8192 tokens with different quantization levels and versions of flash-attention, respectively.

| Quantization | FlashAttn | Speed (2048 tokens) | Speed (8192 tokens) |
| ------------- | :-------: | :------------------:| :------------------:|
| BF16 | v2 | 40.93 | 36.14 |
| Int8 | v2 | 37.47 | 32.54 |
| Int4 | v2 | 50.09 | 38.61 |
| BF16 | v1 | 40.75 | 35.34 |
| Int8 | v1 | 37.51 | 32.39 |
| Int4 | v1 | 45.98 | 36.47 |
| BF16 | Disabled | 37.55 | 33.56 |
| Int8 | Disabled | 37.84 | 32.65 |
| Int4 | Disabled | 48.12 | 36.70 |

具体而言,我们记录在长度为1的上下文的条件下生成8192个token的性能。评测运行于单张A100-SXM4-80G GPU,使用PyTorch 2.0.1和CUDA 11.8。推理速度是生成8192个token的速度均值。

In detail, the setting of profiling is generating 8192 new tokens with 1 context token.
The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.8. The inference speed is averaged over the generated 8192 tokens.

注意:以上Int4/Int8模型生成速度使用autogptq库给出,当前`AutoModelForCausalLM.from_pretrained`载入的模型生成速度会慢大约20%。我们已经将该问题汇报给HuggingFace团队,若有解决方案将即时更新。

Note: The generation speed of the Int4/Int8 models mentioned above is provided by the autogptq library. The current speed of the model loaded using `AutoModelForCausalLM.from_pretrained` will be approximately 20% slower. We have reported this issue to the HuggingFace team and will update promptly if a solution is available.

### 显存使用 (GPU Memory Usage)

我们还测算了不同模型精度编码2048个token及生成8192个token的峰值显存占用情况。(显存消耗在是否使用FlashAttn的情况下均类似。)结果如下所示:

We also profiled the peak GPU memory usage for encoding 2048 tokens as context (and generating a single token) and for generating 8192 tokens (with a single token as context) under different quantization levels. (GPU memory usage is similar whether or not flash-attention is used.) The results are shown below.

| Quantization Level | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
| ------------------ | :---------------------------------: | :-----------------------------------: |
| BF16 | 16.99GB | 22.53GB |
| Int8 | 11.20GB | 16.62GB |
| Int4 | 8.21GB | 13.63GB |

上述性能测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py)完成。

The above speed and memory profiling were conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py).

<br>

## 模型细节(Model)

与Qwen-7B预训练模型相同,Qwen-7B-Chat模型规模基本情况如下所示:

The details of the model architecture of Qwen-7B-Chat are listed as follows:

| Hyperparameter | Value |
|:----------------|:------:|
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 151851 |
| sequence length | 8192 |

在位置编码、FFN激活函数和normalization的实现方式上,我们也采用了目前最流行的做法, 即RoPE相对位置编码、SwiGLU激活函数、RMSNorm(可选安装flash-attention加速)。

在分词器方面,相比目前主流开源模型以中英词表为主,Qwen-7B-Chat使用了约15万token大小的词表。该词表在GPT-4使用的BPE词表`cl100k_base`基础上,对中文、多语言进行了优化,在对中、英、代码数据的高效编解码的基础上,对部分多语言更加友好,方便用户在不扩展词表的情况下对部分语种进行能力增强。词表对数字按单个数字位切分。调用较为高效的[tiktoken分词库](https://github.com/openai/tiktoken)进行分词。

For position encoding, the FFN activation function, and normalization, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU as the activation function, and RMSNorm for normalization (with optional flash-attention installation for acceleration).

For tokenization, compared with the current mainstream open-source models based mainly on Chinese and English vocabularies, Qwen-7B-Chat uses a vocabulary of about 150K tokens. It builds on the `cl100k_base` BPE vocabulary used by GPT-4, optimized for Chinese and multilingual text: on top of efficient encoding of Chinese, English, and code data, it is also friendlier to many other languages, enabling users to enhance the capability for those languages directly without expanding the vocabulary. It splits numbers into single digits and uses the efficient [tiktoken](https://github.com/openai/tiktoken) library for tokenization.

<br>

## 评测效果(Evaluation)

对于Qwen-7B-Chat模型,我们同样评测了常规的中文理解(C-Eval)、英文理解(MMLU)、代码(HumanEval)和数学(GSM8K)等权威任务,同时包含了长序列任务的评测结果。由于Qwen-7B-Chat模型经过对齐后,激发了较强的外部系统调用能力,我们还进行了工具使用能力方面的评测。

提示:由于硬件和框架造成的舍入误差,复现结果如有波动属于正常现象。

For Qwen-7B-Chat, we also evaluate the model on C-Eval, MMLU, HumanEval, GSM8K, etc., as well as benchmarks for long-context understanding and tool usage.

Note: Due to rounding errors caused by hardware and framework, differences in reproduced results are possible.
### 中文评测(Chinese Evaluation)

#### C-Eval

在[C-Eval](https://arxiv.org/abs/2305.08322)验证集上,我们评价了Qwen-7B-Chat模型的0-shot & 5-shot准确率

We demonstrate the 0-shot & 5-shot accuracy of Qwen-7B-Chat on the C-Eval validation set.

| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| LLaMA2-7B-Chat | 31.9 |
| LLaMA2-13B-Chat | 36.2 |
| LLaMA2-70B-Chat | 44.3 |
| ChatGLM2-6B-Chat | 52.6 |
| InternLM-7B-Chat | 53.6 |
| Baichuan2-7B-Chat | 55.6 |
| Baichuan2-13B-Chat | 56.7 |
| Qwen-7B-Chat (original) (0-shot) | 54.2 |
| **Qwen-7B-Chat (0-shot)** | 59.7 |
| **Qwen-7B-Chat (5-shot)** | 59.3 |
| **Qwen-14B-Chat (0-shot)** | 69.8 |
| **Qwen-14B-Chat (5-shot)** | **71.7** |

C-Eval测试集上,Qwen-7B-Chat模型的zero-shot准确率结果如下:

The zero-shot accuracy of Qwen-7B-Chat on the C-Eval test set is provided below:

| Model | Avg. | STEM | Social Sciences | Humanities | Others |
| :---------------------- | :------: | :--: | :-------------: | :--------: | :----: |
| Chinese-Alpaca-Plus-13B | 41.5 | 36.6 | 49.7 | 43.1 | 41.2 |
| Chinese-Alpaca-2-7B | 40.3 | - | - | - | - |
| ChatGLM2-6B-Chat | 50.1 | 46.4 | 60.4 | 50.6 | 46.9 |
| Baichuan-13B-Chat | 51.5 | 43.7 | 64.6 | 56.2 | 49.2 |
| Qwen-7B-Chat (original) | 54.6 | 47.8 | 67.6 | 59.3 | 50.6 |
| **Qwen-7B-Chat** | 58.6 | 53.3 | 72.1 | 62.8 | 52.0 |
| **Qwen-14B-Chat** | **69.1** | 65.1 | 80.9 | 71.2 | 63.4 |

在7B规模模型上,经过人类指令对齐的Qwen-7B-Chat模型,准确率在同类相近规模模型中仍然处于前列。

Compared with other models of comparable size, the human-aligned Qwen-7B-Chat performs well in C-Eval accuracy.

### 英文评测(English Evaluation)

#### MMLU

[MMLU](https://arxiv.org/abs/2009.03300)评测集上,Qwen-7B-Chat模型的 0-shot & 5-shot 准确率如下,效果在同类对齐模型中同样表现较优。

The 0-shot & 5-shot accuracy of Qwen-7B-Chat on MMLU is provided below. Qwen-7B-Chat remains among the top performers compared with other human-aligned models of comparable size.

| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| ChatGLM2-6B-Chat | 46.0 |
| LLaMA2-7B-Chat | 46.2 |
| InternLM-7B-Chat | 51.1 |
| Baichuan2-7B-Chat | 52.9 |
| LLaMA2-13B-Chat | 54.6 |
| Baichuan2-13B-Chat | 57.3 |
| LLaMA2-70B-Chat | 63.8 |
| Qwen-7B-Chat (original) (0-shot) | 53.9 |
| **Qwen-7B-Chat (0-shot)** | 55.8 |
| **Qwen-7B-Chat (5-shot)** | 57.0 |
| **Qwen-14B-Chat (0-shot)** | 64.6 |
| **Qwen-14B-Chat (5-shot)** | **66.5** |

### 代码评测(Coding Evaluation)

Qwen-7B-Chat在[HumanEval](https://github.com/openai/human-eval)的zero-shot Pass@1效果如下

The zero-shot Pass@1 of Qwen-7B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below.

| Model | Pass@1 |
|:-----------------------:|:--------:|
| ChatGLM2-6B-Chat | 11.0 |
| LLaMA2-7B-Chat | 12.2 |
| Baichuan2-7B-Chat | 13.4 |
| InternLM-7B-Chat | 14.6 |
| Baichuan2-13B-Chat | 17.7 |
| LLaMA2-13B-Chat | 18.9 |
| LLaMA2-70B-Chat | 32.3 |
| Qwen-7B-Chat (original) | 24.4 |
| **Qwen-7B-Chat** | 37.2 |
| **Qwen-14B-Chat** | **43.9** |

### 数学评测(Mathematics Evaluation)

在评测数学能力的[GSM8K](https://github.com/openai/grade-school-math)上,Qwen-7B-Chat的准确率结果如下

The accuracy of Qwen-7B-Chat on GSM8K is shown below.

| Model | Acc. |
|:------------------------------------:|:--------:|
| LLaMA2-7B-Chat | 26.3 |
| ChatGLM2-6B-Chat | 28.8 |
| Baichuan2-7B-Chat | 32.8 |
| InternLM-7B-Chat | 33.0 |
| LLaMA2-13B-Chat | 37.1 |
| Baichuan2-13B-Chat | 55.3 |
| LLaMA2-70B-Chat | 59.3 |
| **Qwen-7B-Chat (original) (0-shot)** | 41.1 |
| **Qwen-7B-Chat (0-shot)** | 50.3 |
| **Qwen-7B-Chat (8-shot)** | 54.1 |
| **Qwen-14B-Chat (0-shot)** | **60.1** |
| **Qwen-14B-Chat (8-shot)** | 59.3 |

### 长序列评测(Long-Context Understanding)

通过NTK插值,LogN注意力缩放可以扩展Qwen-7B-Chat的上下文长度。在长文本摘要数据集[VCSUM](https://arxiv.org/abs/2305.05280)上(文本平均长度在15K左右),Qwen-7B-Chat的Rouge-L结果如下:

**(若要启用这些技巧,请将config.json里的`use_dynamic_ntk`和`use_logn_attn`设置为true)**

We introduce NTK-aware interpolation and LogN attention scaling to extend the context length of Qwen-7B-Chat. The Rouge-L results of Qwen-7B-Chat on the long-text summarization dataset [VCSUM](https://arxiv.org/abs/2305.05280) (whose average text length is around 15K) are shown below:

**(To use these tricks, please set `use_dynamic_ntk` and `use_logn_attn` to true in config.json.)**

| Model | VCSUM (zh) |
|:------------------|:----------:|
| GPT-3.5-Turbo-16k | 16.0 |
| LLaMA2-7B-Chat | 0.2 |
| InternLM-7B-Chat | 13.0 |
| ChatGLM2-6B-Chat | 16.3 |
| **Qwen-7B-Chat** | **16.6** |

### 工具使用能力的评测(Tool Usage)

#### ReAct Prompting

千问支持通过 [ReAct Prompting](https://arxiv.org/abs/2210.03629) 调用插件/工具/API。ReAct 也是 [LangChain](https://python.langchain.com/) 框架采用的主要方式之一。在我们开源的、用于评估工具使用能力的评测基准上,千问的表现如下:

Qwen-Chat supports calling plugins/tools/APIs through [ReAct Prompting](https://arxiv.org/abs/2210.03629). ReAct is also one of the main approaches used by the [LangChain](https://python.langchain.com/) framework. In our evaluation benchmark for assessing tool usage capabilities, Qwen-Chat's performance is as follows:

<table>
    <tr>
        <th colspan="4" align="center">Chinese Tool-Use Benchmark</th>
    </tr>
    <tr>
        <th align="center">Model</th><th align="center">Tool Selection (Acc.↑)</th><th align="center">Tool Input (Rouge-L↑)</th><th align="center">False Positive Error↓</th>
    </tr>
    <tr>
        <td>GPT-4</td><td align="center">95%</td><td align="center">0.90</td><td align="center">15.0%</td>
    </tr>
    <tr>
        <td>GPT-3.5</td><td align="center">85%</td><td align="center">0.88</td><td align="center">75.0%</td>
    </tr>
    <tr>
        <td>Qwen-7B-Chat</td><td align="center">98%</td><td align="center">0.91</td><td align="center">7.3%</td>
    </tr>
    <tr>
        <td>Qwen-14B-Chat</td><td align="center">98%</td><td align="center">0.93</td><td align="center">2.4%</td>
    </tr>
</table>

> 评测基准中出现的插件均没有出现在千问的训练集中。该基准评估了模型在多个候选插件中选择正确插件的准确率、传入插件的参数的合理性、以及假阳率。假阳率(False Positive)定义:在处理不该调用插件的请求时,错误地调用了插件。

> The plugins that appear in the evaluation set do not appear in the training set of Qwen. This benchmark evaluates the accuracy of the model in selecting the correct plugin from multiple candidate plugins, the rationality of the parameters passed into the plugin, and the false positive rate. False positive: incorrectly invoking a plugin when responding to a query that should not have triggered one.
![](assets/react_showcase_001.png) ![](assets/react_showcase_002.png) #### Code Interpreter 为了考察Qwen使用Python Code Interpreter完成数学解题、数据可视化、及文件处理与爬虫等任务的能力,我们专门建设并开源了一个评测这方面能力的[评测基准](https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark)。 我们发现Qwen在生成代码的可执行率、结果正确性上均表现较好: To assess Qwen's ability to use the Python Code Interpreter for tasks such as mathematical problem solving, data visualization, and other general-purpose tasks such as file handling and web scraping, we have created and open-sourced a benchmark specifically designed for evaluating these capabilities. You can find the benchmark at this [link](https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark). We have observed that Qwen performs well in terms of code executability and result accuracy when generating code: <table> <tr> <th colspan="4" align="center">Executable Rate of Generated Code (%)</th> </tr> <tr> <th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization↑</th><th align="center">General↑</th> </tr> <tr> <td>GPT-4</td><td align="center">91.9</td><td align="center">85.9</td><td align="center">82.8</td> </tr> <tr> <td>GPT-3.5</td><td align="center">89.2</td><td align="center">65.0</td><td align="center">74.1</td> </tr> <tr> <td>LLaMA2-7B-Chat</td> <td align="center">41.9</td> <td align="center">33.1</td> <td align="center">24.1 </td> </tr> <tr> <td>LLaMA2-13B-Chat</td> <td align="center">50.0</td> <td align="center">40.5</td> <td align="center">48.3 </td> </tr> <tr> <td>CodeLLaMA-7B-Instruct</td> <td align="center">85.1</td> <td align="center">54.0</td> <td align="center">70.7 </td> </tr> <tr> <td>CodeLLaMA-13B-Instruct</td> <td align="center">93.2</td> <td align="center">55.8</td> <td align="center">74.1 </td> </tr> <tr> <td>InternLM-7B-Chat-v1.1</td> <td align="center">78.4</td> <td align="center">44.2</td> <td align="center">62.1 </td> </tr> <tr> <td>InternLM-20B-Chat</td> <td align="center">70.3</td> <td align="center">44.2</td> <td align="center">65.5 </td> </tr> <tr> <td>Qwen-7B-Chat</td> <td align="center">82.4</td> <td align="center">64.4</td> <td align="center">67.2 </td> </tr> <tr> <td>Qwen-14B-Chat</td> <td align="center">89.2</td> <td align="center">84.1</td> <td align="center">65.5</td> </tr> </table> <table> <tr> <th colspan="4" align="center">Accuracy of Code Execution Results (%)</th> </tr> <tr> <th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization-Hard↑</th><th align="center">Visualization-Easy↑</th> </tr> <tr> <td>GPT-4</td><td align="center">82.8</td><td align="center">66.7</td><td align="center">60.8</td> </tr> <tr> <td>GPT-3.5</td><td align="center">47.3</td><td align="center">33.3</td><td align="center">55.7</td> </tr> <tr> <td>LLaMA2-7B-Chat</td> <td align="center">3.9</td> <td align="center">14.3</td> <td align="center">39.2 </td> </tr> <tr> <td>LLaMA2-13B-Chat</td> <td align="center">8.3</td> <td align="center">8.3</td> <td align="center">40.5 </td> </tr> <tr> <td>CodeLLaMA-7B-Instruct</td> <td align="center">14.3</td> <td align="center">26.2</td> <td align="center">60.8 </td> </tr> <tr> <td>CodeLLaMA-13B-Instruct</td> <td align="center">28.2</td> <td align="center">27.4</td> <td align="center">62.0 </td> </tr> <tr> <td>InternLM-7B-Chat-v1.1</td> <td align="center">28.5</td> <td align="center">4.8</td> <td align="center">40.5 </td> </tr> <tr> <td>InternLM-20B-Chat</td> <td align="center">34.6</td> <td align="center">21.4</td> <td align="center">45.6 </td> </tr> <tr> <td>Qwen-7B-Chat</td> <td align="center">41.9</td> <td 
align="center">40.5</td> <td align="center">54.4 </td> </tr> <tr> <td>Qwen-14B-Chat</td> <td align="center">58.4</td> <td align="center">53.6</td> <td align="center">59.5</td> </tr> </table> <p align="center"> <br> <img src="assets/code_interpreter_showcase_001.jpg" /> <br> <p> #### Huggingface Agent 千问还具备作为 [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents) 的能力。它在 Huggingface 提供的run模式评测基准上的表现如下: Qwen-Chat also has the capability to be used as a [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents). Its performance on the run-mode benchmark provided by HuggingFace is as follows: <table> <tr> <th colspan="4" align="center">HuggingFace Agent Benchmark- Run Mode</th> </tr> <tr> <th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th> </tr> <tr> <td>GPT-4</td><td align="center">100</td><td align="center">100</td><td align="center">97.4</td> </tr> <tr> <td>GPT-3.5</td><td align="center">95.4</td><td align="center">96.3</td><td align="center">87.0</td> </tr> <tr> <td>StarCoder-Base-15B</td><td align="center">86.1</td><td align="center">87.0</td><td align="center">68.9</td> </tr> <tr> <td>StarCoder-15B</td><td align="center">87.0</td><td align="center">88.0</td><td align="center">68.9</td> </tr> <tr> <td>Qwen-7B-Chat</td><td align="center">87.0</td><td align="center">87.0</td><td align="center">71.5</td> </tr> <tr> <td>Qwen-14B-Chat</td><td align="center">93.5</td><td align="center">94.4</td><td align="center">87.0</td> </tr> </table> <table> <tr> <th colspan="4" align="center">HuggingFace Agent Benchmark - Chat Mode</th> </tr> <tr> <th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th> </tr> <tr> <td>GPT-4</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">98.5</td> </tr> <tr> <td>GPT-3.5</td><td align="center">97.3</td><td align="center">96.8</td><td align="center">89.6</td> </tr> <tr> <td>StarCoder-Base-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">91.1</td> </tr> <tr> <td>StarCoder-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">89.6</td> </tr> <tr> <td>Qwen-7B-Chat</td><td align="center">94.7</td><td align="center">94.7</td><td align="center">85.1</td> </tr> <tr> <td>Qwen-14B-Chat</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">95.5</td> </tr> </table> <br> ## FAQ 如遇到问题,敬请查阅[FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。 If you meet problems, please refer to [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and the issues first to search a solution before you launch a new issue. <br> ## 引用 (Citation) 如果你觉得我们的工作对你有帮助,欢迎引用! If you find our work helpful, feel free to give us a cite. 
```
@article{qwen,
  title={Qwen Technical Report},
  author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
  journal={arXiv preprint arXiv:2309.16609},
  year={2023}
}
```
<br>

## 使用协议(License Agreement)

我们的代码和模型权重对学术研究完全开放,并支持商用。请查看[LICENSE](https://github.com/QwenLM/Qwen/blob/main/LICENSE)了解具体的开源协议细节。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。

Our code and checkpoints are fully open for research purposes, and commercial use is also allowed. Check the [LICENSE](https://github.com/QwenLM/Qwen/blob/main/LICENSE) for more details about the license. If you have requirements for commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply.

<br>

## 联系我们(Contact Us)

如果你想给我们的研发团队和产品团队留言,欢迎加入我们的微信群、钉钉群以及Discord!同时,也欢迎通过邮件([email protected])联系我们。

If you would like to leave a message for our research or product teams, join our Discord or WeChat groups! You can also reach us by email at [email protected].
aivanni/q-Taxi-v3
aivanni
2023-11-21T11:10:14Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T11:10:12Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # use `import gym` with older Gym versions

# `load_from_hub` is the small helper from the Deep RL course; a sketch of it is given below.
model = load_from_hub(repo_id="aivanni/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
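`load_from_hub` is not a packaged API; it is a small helper along the lines of the one used in the Hugging Face Deep RL course notebooks. A minimal sketch, assuming the model was pushed as a pickled dict containing keys such as `qtable` and `env_id`:

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-Learning model dict from the Hub and load it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```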
Rukminikalla/stable_diffusion_models
Rukminikalla
2023-11-21T11:08:15Z
0
1
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-21T10:11:59Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - Rukminikalla/stable_diffusion_models

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.

DreamBooth for the text encoder was enabled: False.
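A minimal inference sketch with 🧨 diffusers (fp16, CUDA, and the example prompt are assumptions; the "sks" token comes from the instance prompt above):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Rukminikalla/stable_diffusion_models", torch_dtype=torch.float16
).to("cuda")

# "sks" is the instance token this model was trained on.
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```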
salti/AraElectra-base-finetuned-ARCD
salti
2023-11-21T11:06:05Z
14
2
transformers
[ "transformers", "pytorch", "safetensors", "electra", "question-answering", "ar", "dataset:arcd", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - ar datasets: - arcd widget: - text: "أين يعيش محمد ؟" context: "اسمي محمد وأنا أعيش في سوريا" - text: "ما العدد الذري للهيدروجين ؟" context: "الهيدروجين هو عنصر كيميائي عدده الذري 1 ، وهو غاز عديم الرائحة واللون وهو سريع الاشتعال" - text: "ما خواص الهيدروجين ؟" context: "الهيدروجين هو عنصر كيميائي عدده الذري 1 ، وهو غاز عديم الرائحة واللون وهو سريع الاشتعال" ---
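AraElectra-base fine-tuned on the ARCD dataset for extractive question answering in Arabic. A minimal usage sketch with the 🤗 Transformers `question-answering` pipeline (it reuses the first widget sample above; the printed keys follow the standard pipeline output):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="salti/AraElectra-base-finetuned-ARCD")

result = qa(
    question="أين يعيش محمد ؟",
    context="اسمي محمد وأنا أعيش في سوريا",
)
print(result["answer"], result["score"])
```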
aivanni/q-FrozenLake-v1-4x4-noSlippery
aivanni
2023-11-21T11:00:45Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T11:00:42Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gymnasium as gym  # use `import gym` with older Gym versions

# `load_from_hub` is the small helper from the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="aivanni/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
s-nlp/mbart-detox-en-ru
s-nlp
2023-11-21T11:00:07Z
13
1
transformers
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "ru", "en", "dataset:s-nlp/paradetox", "dataset:s-nlp/ru_paradetox", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-20T10:19:27Z
--- datasets: - s-nlp/paradetox - s-nlp/ru_paradetox language: - ru - en library_name: transformers pipeline_tag: text2text-generation license: openrail++ --- ## Model Description This is the model presented in the paper "Exploring Methods for Cross-lingual Text Style Transfer: The Case of Text Detoxification". The model is based on [mBART-large-50](https://huggingface.co/facebook/mbart-large-50) and trained on two parallel detoxification corpora: [ParaDetox](https://huggingface.co/datasets/s-nlp/paradetox) and [RuDetox](https://github.com/s-nlp/russe_detox_2022/tree/main/data). More details about this model are in the paper. ## Usage 1. Model loading. ```python from transformers import MBartForConditionalGeneration, AutoTokenizer model = MBartForConditionalGeneration.from_pretrained("s-nlp/mbart-detox-en-ru").cuda() tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50") ``` 2. Detoxification utility. ```python def paraphrase(text, model, tokenizer, n=None, max_length="auto", beams=3): texts = [text] if isinstance(text, str) else text inputs = tokenizer(texts, return_tensors="pt", padding=True)["input_ids"].to( model.device ) if max_length == "auto": max_length = inputs.shape[1] + 10 result = model.generate( inputs, num_return_sequences=n or 1, do_sample=True, temperature=1.0, repetition_penalty=10.0, max_length=max_length, min_length=int(0.5 * max_length), num_beams=beams, forced_bos_token_id=tokenizer.lang_code_to_id[tokenizer.tgt_lang] ) texts = [tokenizer.decode(r, skip_special_tokens=True) for r in result] if not n and isinstance(text, str): return texts[0] return texts ``` ## Citation TBD
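### Example call

An end-to-end example call of the `paraphrase` utility defined in the Usage section (a sketch: the input sentence is made up, and the language codes are assumptions noted in the comments):

```python
# The function reads tokenizer.tgt_lang to resolve forced_bos_token_id, so set it
# explicitly. "en_XX" assumes English-to-English detoxification; use "ru_RU" for Russian.
tokenizer.src_lang = "en_XX"
tokenizer.tgt_lang = "en_XX"

print(paraphrase("you are a complete idiot", model, tokenizer))
```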
salti/wav2vec2-large-xlsr-arabic-common_voice-10_epochs
salti
2023-11-21T10:58:50Z
9
0
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
model-index:
- name: wav2vec2-large-xlsr-arabic-common_voice-10_epochs
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xlsr-arabic-common_voice-10_epochs

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3581
- Wer: 0.4555

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1701 | 0.9 | 400 | 3.1599 | 1.0 |
| 0.8933 | 1.8 | 800 | 0.7198 | 0.7877 |
| 0.5849 | 2.7 | 1200 | 0.5046 | 0.6253 |
| 0.3858 | 3.6 | 1600 | 0.4247 | 0.5561 |
| 0.3083 | 4.49 | 2000 | 0.4026 | 0.5251 |
| 0.2556 | 5.39 | 2400 | 0.4010 | 0.5051 |
| 0.2221 | 6.29 | 2800 | 0.3765 | 0.4861 |
| 0.2026 | 7.19 | 3200 | 0.3652 | 0.4794 |
| 0.1996 | 8.09 | 3600 | 0.3627 | 0.4660 |
| 0.1755 | 8.99 | 4000 | 0.3582 | 0.4619 |
| 0.1697 | 9.89 | 4400 | 0.3581 | 0.4555 |

### Framework versions

- Transformers 4.6.0
- Pytorch 1.8.1+cu102
- Datasets 1.6.2
- Tokenizers 0.10.2
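A minimal transcription sketch using the 🤗 Transformers `automatic-speech-recognition` pipeline (the audio path is a placeholder; 16 kHz input is assumed, as usual for wav2vec2 models):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="salti/wav2vec2-large-xlsr-arabic-common_voice-10_epochs",
)
print(asr("path/to/arabic_audio.wav")["text"])  # placeholder path
```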
Definite/klue-bert-cult-classification
Definite
2023-11-21T10:50:19Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:Definite/klue-bert-mlm-bible", "base_model:finetune:Definite/klue-bert-mlm-bible", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-16T15:38:57Z
--- license: cc-by-sa-4.0 base_model: Definite/klue-bert-mlm-bible tags: - generated_from_trainer metrics: - accuracy model-index: - name: klue-bert-cult-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # klue-bert-cult-classification This model is a fine-tuned version of [Definite/klue-bert-mlm-bible](https://huggingface.co/Definite/klue-bert-mlm-bible) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0816 - Accuracy: 0.988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 313 | 0.1386 | 0.984 | | 0.0203 | 2.0 | 626 | 0.0910 | 0.9853 | | 0.0203 | 3.0 | 939 | 0.0745 | 0.989 | | 0.0031 | 4.0 | 1252 | 0.0900 | 0.9877 | | 0.0004 | 5.0 | 1565 | 0.0823 | 0.988 | | 0.0004 | 6.0 | 1878 | 0.0816 | 0.988 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
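A minimal inference sketch with the 🤗 Transformers `text-classification` pipeline (the input sentence is a placeholder; the label names come from the fine-tuning data, which is not documented here):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Definite/klue-bert-cult-classification")
print(clf("여기에 분류할 한국어 문장을 입력하세요."))  # placeholder Korean input
```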
Skywork/Skywork-13B-Base-Intermediate
Skywork
2023-11-21T10:34:46Z
0
7
null
[ "arxiv:2310.19341", "arxiv:2310.16713", "license:other", "region:us" ]
null
2023-11-20T07:27:13Z
---
license: other
license_name: license
license_link: >-
  https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf
---

<!-- <div align="center">
<h1>
  ✨Skywork
</h1>
</div> -->

<div align="center"><img src="misc/skywork_logo.jpeg" width="550"/></div>

<p align="center">
👨‍💻 <a href="https://github.com/SkyworkAI/Skywork" target="_blank">Github</a> • 🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a>• 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> • 💬 <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a>• 📜<a href="http://arxiv.org/abs/2310.19341" target="_blank">Tech Report</a>
</p>

<div align="center">

[🎉天工在线对话平台已正式向公众开放](https://sso.tiangong.cn/?redirect=https://model-platform.tiangong.cn/overview&client_id=200005)

</div>

<div align="center">

[![GitHub Stars](https://img.shields.io/github/stars/SkyworkAI/Skywork)](https://github.com/SkyworkAI/Skywork/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/SkyworkAI/Skywork)](https://github.com/SkyworkAI/Skywork/fork)
</div>

# 介绍(Introduction)

本项目提供Skywork-13B-Base第一阶段预训练(约0~3T)过程中产生的中间存档。

In this repository we provide intermediate checkpoints of our Skywork-13B-Base during Stage-1 pretraining (0~3T), including:

- Skywork-13B-Base-0.5T
- Skywork-13B-Base-1T
- Skywork-13B-Base-1.5T
- Skywork-13B-Base-2T
- Skywork-13B-Base-2.5T
- Skywork-13B-Base-3T

如果您希望了解更多的信息,如训练方案,评估方法,请参考我们的[技术报告](http://arxiv.org/abs/2310.19341),[Skymath](https://arxiv.org/abs/2310.16713)论文,[SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf)论文。

If you are interested in more training and evaluation details, please refer to our [technical report](http://arxiv.org/abs/2310.19341), the [Skymath](https://arxiv.org/abs/2310.16713) paper and the [SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf) paper.

# 声明和协议(Declaration and License Agreement)

## 声明(Declaration)

我们在此声明,不要利用Skywork模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Skywork 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。

我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用skywork开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。

We hereby declare that the Skywork model should not be used for any activities that pose a threat to national or societal security or engage in unlawful actions. Additionally, we request users not to deploy the Skywork model for internet services without appropriate security reviews and records. We hope that all users will adhere to this principle to ensure that technological advancements occur in a regulated and lawful environment.

We have done our utmost to ensure the compliance of the data used during the model's training process. However, despite our extensive efforts, due to the complexity of the model and data, there may still be unpredictable risks and issues. Therefore, if any problems arise as a result of using the Skywork open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility.
## 协议(License Agreement)

社区使用Skywork模型需要遵循[《Skywork 模型社区许可协议》](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf)。Skywork模型支持商业用途,如果您计划将Skywork模型或其衍生品用于商业目的,无需再次申请, 但请您仔细阅读[《Skywork 模型社区许可协议》](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf)并严格遵守相关条款。

The community usage of the Skywork model requires the [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf). The Skywork model supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by the terms and conditions within the [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf).

[《Skywork 模型社区许可协议》]: https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf
[[email protected]]: mailto:[email protected]

# 引用和联系我们(Contact Us and Citation)

如果您觉得我们的工作对您有帮助,欢迎引用我们的论文~

If you find our work helpful, please feel free to cite our paper~

```
@misc{wei2023skywork,
      title={Skywork: A More Open Bilingual Foundation Model},
      author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
      year={2023},
      eprint={2310.19341},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@article{skyworkmath,
  title={SkyMath: Technical Report},
  author={Liu Yang and Haihua Yang and Wenjun Cheng and Lei Lin and Chenxia Li and Yifu Chen and Lunan Liu and Jianfei Pan and Tianwen Wei and Biye Li and Liang Zhao and Lijie Wang and Bo Zhu and Guoliang Li and Xuejie Wu and Xilin Luo and Rui Hu},
  journal={arXiv preprint arXiv: 2310.16713},
  url={https://arxiv.org/abs/2310.16713},
  year={2023}
}
```

```
@article{Skywork_Multi-Modal_Group_Empirical_Study_Towards_2023,
    author = {Skywork Multi-Modal Group},
    month = sep,
    title = {{Empirical Study Towards Building An Effective Multi-Modal Large Language Model}},
    year = {2023}
}
```
latestissue/rwkv-v5-world-v2-1.5b-one-state-slim-16k-ggml-quantized
latestissue
2023-11-21T10:33:03Z
0
2
null
[ "license:apache-2.0", "region:us" ]
null
2023-11-20T11:08:36Z
--- license: apache-2.0 --- Source: https://huggingface.co/xiaol/RWKV-v5-world-v2-1.5B-one-state-slim-16k
latestissue/rwkv-v5-world-v2-1.5b-one-state-slim-16k-novel-tuned-ggml-quantized
latestissue
2023-11-21T10:32:18Z
0
2
null
[ "license:apache-2.0", "region:us" ]
null
2023-11-20T11:37:01Z
--- license: apache-2.0 --- Source: https://huggingface.co/xiaol/RWKV-v5-world-v2-1.5B-one-state-slim-16k-novel-tuned
latestissue/rwkv-5-world-ggml
latestissue
2023-11-21T10:31:48Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2023-08-03T16:05:07Z
--- license: apache-2.0 --- Source: https://huggingface.co/BlinkDL/rwkv-5-world
VoidZeroe/llama4.0-model
VoidZeroe
2023-11-21T10:29:33Z
2
0
peft
[ "peft", "region:us" ]
null
2023-11-14T18:03:36Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
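The 4-bit quantization settings listed above correspond to a `BitsAndBytesConfig` along these lines (a sketch; the base model this adapter was trained on is not documented in this card):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings recorded during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```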
zhijian12345/ppo-Huggy
zhijian12345
2023-11-21T10:25:49Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-21T10:25:42Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: zhijian12345/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
ImNobody/whisper-large-v2-NSC_Korpora_7-100steps
ImNobody
2023-11-21T10:12:48Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-large-v2", "base_model:adapter:openai/whisper-large-v2", "region:us" ]
null
2023-11-21T10:12:38Z
--- library_name: peft base_model: openai/whisper-large-v2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.3.dev0
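A minimal loading sketch for this adapter (assumptions: it targets `openai/whisper-large-v2` as listed in the metadata, and 8-bit loading mirrors the bitsandbytes config above):

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ImNobody/whisper-large-v2-NSC_Korpora_7-100steps")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
```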
nanduzz/ppo-LunarLander-v2
nanduzz
2023-11-21T09:52:50Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T09:52:31Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 280.95 +/- 15.71
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption based on the usual `package_to_hub` naming; verify it in the repo's Files tab):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repository's Files tab.
checkpoint = load_from_hub(repo_id="nanduzz/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
nminnie/zephyr-7b-sft-lora
nminnie
2023-11-21T09:37:44Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-21T06:06:17Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: zephyr-7b-sft-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-sft-lora This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0876 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1069 | 0.64 | 10 | 1.1042 | | 1.0845 | 1.66 | 21 | 1.0876 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.1+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
csukuangfj/sherpa-onnx-conformer-zh-stateless2-2023-05-23
csukuangfj
2023-11-21T09:35:27Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2023-05-23T07:16:52Z
---
license: apache-2.0
---

# Introduction

Models in this repo are converted from https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2 which was trained using https://github.com/k2-fsa/icefall/pull/349
samle/sd-webui-models
samle
2023-11-21T09:31:35Z
0
242
null
[ "region:us" ]
null
2023-02-22T10:02:43Z
Some models mirrored from civitai

- [taiwan-doll-likeness](https://civitai.com/models/7716/taiwan-doll-likeness)

![taiwan-doll-likeness](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1ca3e089-d67f-4ce2-391d-f89b6994a500/width=200)

taiwan-doll-likeness: Taiwanese girl likeness

---

- [korean-doll-likeness](https://civitai.com/models/7448/korean-doll-likeness)

![korean-doll-likeness](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f5f0308b-ea04-478d-9923-a9fee22e3f00/width=200)

korean-doll-likeness: Korean girl likeness

---

- [japanese-doll-likeness](https://civitai.com/models/10135/japanese-doll-likeness)

japanese-doll-likeness: Japanese girl likeness
goofyai/torino-aqua-style-goofy-ai
goofyai
2023-11-21T09:31:12Z
7
2
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "anime", "female", "style", "art style", "woman", "styles", "base_model:goofyai/Goofballmix", "base_model:adapter:goofyai/Goofballmix", "license:other", "region:us" ]
text-to-image
2023-11-21T08:34:29Z
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- anime
- female
- style
- art style
- woman
- styles
base_model: goofyai/Goofballmix
instance_prompt: torino aqua style
widget:
- text: 'torino aqua style, crescent, bow, hat, striped, mob cap, bouquet, striped dress, pink flower, purple hair, ribbon, very long hair, 1girl, purple eyes, long hair, heart, dress, hair bow, blue flower, long sleeves, book, solo, rose, flower '
  output:
    url: >-
      3269163.jpeg
- text: 'torino aqua style, contemporary, red shirt, black skirt, plant, medium breasts, red sweater, smile, looking at viewer, red ribbon, hair between eyes, day, ponytail, collarbone, sweater, table, :d, earrings, jewelry, ribbon, blush, skirt, bag, necklace, phone, sleeves past wrists, chair, 1girl, long hair, potted plant, sitting, open mouth, grey hair, long sleeves, solo, black footwear, flower, casual, red eyes, hair ribbon'
  output:
    url: >-
      3269161.jpeg
- text: 'torino aqua style,1girl, demon horns, white long dress, white frills, lace, indoors, window, stand, dark skin, long hair, blonde hair, yellow eyes, smile, sitting, gold, gold edge red couch, hand on chin, town, night'
  output:
    url: >-
      3269164.jpeg
- text: 'torino aqua style, black skirt, school uniform, sailor collar, serafuku, looking at viewer, standing, neckerchief, clock, blue sailor collar, shirt, closed mouth, collarbone, mirror, green eyes, white shirt, pleated skirt, reflection, blush, skirt, blunt bangs, red neckerchief, blue skirt, 1girl, black hair, long hair, long sleeves, solo, indoors'
  output:
    url: >-
      3269162.jpeg
- text: 'torino aqua style, bow, holding, black skirt, two side up, cup, smile, looking at viewer, tree, red jacket, disposable cup, brown pantyhose, sweater, blonde hair, lamppost, coffee cup, parted bangs, bench, :d, jacket, pantyhose, pleated skirt, red bow, blush, skirt, black pantyhose, bag, very long hair, sleeves past wrists, holding cup, 1girl, long hair, sitting, hair bow, open mouth, outdoors, long sleeves, solo, red eyes'
  output:
    url: >-
      3269166.jpeg
- text: 'torino aqua style, contemporary, small breasts, plant, cup, neck ribbon, shoulder bag, looking at viewer, red ribbon, fruit, strawberry, table, earrings, jewelry, ribbon, blush, parfait, purple dress, bag, food, sundae, whipped cream, chair, floral print, 1girl, potted plant, sitting, ice cream, dress, grey hair, short hair, long sleeves, solo, indoors, red eyes, hair ribbon'
  output:
    url: >-
      3269165.jpeg
---

# Torino Aqua Style | Goofy Ai

<Gallery />

## Model description

If you like my work then drop a 5-star review and hit the heart icon. It's free!

The best way to reach me is by comment and Discord.

## Character commission is open on [Buymeacoffee](https://www.buymeacoffee.com/goofy02)

### Join my new [Discord Server](https://discord.gg/M8yAsU9ZhC)

Use the preview images as a starting point.

A LoRA weight of 0.7-1 works great.

## Trigger words

You should use `torino aqua style` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.
[Download](/goofyai/torino-aqua-style-goofy-ai/tree/main) them in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# The card lists goofyai/Goofballmix as the base model; the generic SD 1.5
# checkpoint below may produce results that differ from the preview images.
pipeline = AutoPipelineForText2Image.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to('cuda')
# NOTE: 'leonardo_style.safetensors' looks like a template leftover; check the
# actual weight filename in the repo's Files & versions tab.
pipeline.load_lora_weights('goofyai/torino-aqua-style-goofy-ai', weight_name='leonardo_style.safetensors')
image = pipeline('torino aqua style, contemporary, small breasts, plant, cup, neck ribbon, shoulder bag, looking at viewer, red ribbon, fruit, strawberry, table, earrings, jewelry, ribbon, blush, parfait, purple dress, bag, food, sundae, whipped cream, chair, floral print, 1girl, potted plant, sitting, ice cream, dress, grey hair, short hair, long sleeves, solo, indoors, red eyes, hair ribbon').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
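Since the card recommends a LoRA weight of 0.7-1, the adapter strength can also be set per call. A minimal sketch using the pipeline from above (the `scale` value and the shortened prompt are illustrative):

```py
# Apply the recommended LoRA weight (0.7-1) via the cross-attention scale.
image = pipeline(
    'torino aqua style, 1girl, long hair, dress, flower, looking at viewer',
    cross_attention_kwargs={"scale": 0.8},
).images[0]
```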
NeuNav/dqn-SpaceInvadersNoFrameskip-v4
NeuNav
2023-11-21T09:29:16Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-21T08:06:47Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 582.50 +/- 199.69 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NeuNav -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NeuNav -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga NeuNav ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
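## Loading the model directly in Python

Outside the RL Zoo CLI, the checkpoint can also be loaded with Stable-Baselines3 and the `huggingface_sb3` helper (`pip install huggingface_sb3`). A minimal sketch; the checkpoint filename is assumed from the RL Zoo naming convention and should be verified in the repo's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Download the checkpoint from the Hub (filename assumed from the RL Zoo convention).
checkpoint = load_from_hub(
    repo_id="NeuNav/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# Recreate the training-time preprocessing: AtariWrapper plus a 4-frame stack.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

model = DQN.load(checkpoint, env=env)

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```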
botcon/Somethings
botcon
2023-11-21T09:28:36Z
0
0
null
[ "region:us" ]
null
2023-11-15T08:30:20Z
Main Script: QuestionAnswering.py

The script uses the Hugging Face libraries for managing the datasets and for importing, exporting and training the models. Several configuration variables appear at the start of the script:

- train: Train a new model
- PEFT: Whether to use PEFT during training
- tf32/fp16: Mixed-precision training choice
- trained_model: Name of the trained model (to be pushed to the HF Hub)
- train_checkpoint: Training checkpoint to resume from (None by default)
- squad_shift: Whether to include extra data (squadshift)
- base_tokenizer: Tokenizer of the base model
- base_model: Pre-trained base model
- test: Test a model
- tokenizer_list/model_list/question_list: Which tokenizers, models and questions to test

CUDA is used when available. Training requires logging in to the Hugging Face Hub (via a command-line token or through the script); alternatively, nothing is pushed to the Hub and a local repository is created instead.

Hugging Face repositories created (models produced):

- botcon/XLNET_squad_finetuned_large
- botcon/XLNET_squadshift_finetuned_large
- botcon/LUKE_squad_finetuned_large
- botcon/LUKE_squadshift_finetuned_large
- botcon/LUKE_squad_what
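For reference, a minimal sketch of running extractive question answering with one of the published checkpoints via the `transformers` pipeline API (this assumes the repositories are public and expose a standard question-answering head):

```python
from transformers import pipeline

# Load one of the fine-tuned checkpoints listed above.
qa = pipeline("question-answering", model="botcon/LUKE_squad_finetuned_large")

result = qa(
    question="Which library manages the datasets and models?",
    context="The script uses the Hugging Face libraries for managing the "
            "datasets and for importing, exporting and training the models.",
)
print(result["answer"], result["score"])
```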
TheBloke/Orca-2-7B-GPTQ
TheBloke
2023-11-21T09:26:45Z
38
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "orca", "orca2", "microsoft", "arxiv:2311.11045", "base_model:microsoft/Orca-2-7b", "base_model:quantized:microsoft/Orca-2-7b", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-11-21T09:09:47Z
--- base_model: microsoft/Orca-2-7b inference: false license: other model_creator: Microsoft model_name: Orca 2 7B model_type: llama pipeline_tag: text-generation prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke tags: - orca - orca2 - microsoft --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Orca 2 7B - GPTQ - Model creator: [Microsoft](https://huggingface.co/microsoft) - Original model: [Orca 2 7B](https://huggingface.co/microsoft/Orca-2-7b) <!-- description start --> # Description This repo contains GPTQ model files for [Microsoft's Orca 2 7B](https://huggingface.co/microsoft/Orca-2-7b). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Orca-2-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Orca-2-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Orca-2-7B-GGUF) * [Microsoft's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/microsoft/Orca-2-7b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Orca-2-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Orca-2-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Orca-2-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Orca-2-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. 
|
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Orca-2-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Orca-2-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/Orca-2-7B-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, e.g. `TheBloke/Orca-2-7B-GPTQ:gptq-4bit-32g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `Orca-2-7B-GPTQ`:

```shell
mkdir Orca-2-7B-GPTQ
huggingface-cli download TheBloke/Orca-2-7B-GPTQ --local-dir Orca-2-7B-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Orca-2-7B-GPTQ
huggingface-cli download TheBloke/Orca-2-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Orca-2-7B-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir Orca-2-7B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Orca-2-7B-GPTQ --local-dir Orca-2-7B-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Orca-2-7B-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)

<!-- README_GPTQ.md-download-from-branches end -->

<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Orca-2-7B-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/Orca-2-7B-GPTQ:gptq-4bit-32g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Orca-2-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!

<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Orca-2-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
# System prompt for the ChatML template below; customise as needed.
system_message = "You are a helpful assistant."
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

client = InferenceClient(endpoint_url)
# Send the fully formatted ChatML prompt, not the bare prompt.
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```

### Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Orca-2-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
# System prompt for the ChatML template below; customise as needed.
system_message = "You are a helpful assistant."
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Microsoft's Orca 2 7B # Orca 2 <!-- Provide a quick summary of what the model is/does. --> Orca 2 is a helpful assistant that is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reasoning. We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs. ## What is Orca 2’s intended use(s)? + Orca 2 is built for research purposes only. + The main purpose is to allow the research community to assess its abilities and to provide a foundation for building better frontier models. ## How was Orca 2 evaluated? + Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations. ## Model Details Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities. All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf). Please refer to LLaMA-2 technical report for details on the model architecture. ## License Orca 2 is licensed under the [Microsoft Research License](LICENSE). Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved. 
## Bias, Risks, and Limitations

Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the common limitations of other large language models and limitations caused by its training process, including:

**Data Biases**: Large language models, trained on extensive data, can inadvertently carry biases present in the source data. Consequently, the models may generate outputs that could be potentially biased or unfair.

**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting in potential inaccuracies or nonsensical responses.

**Lack of Transparency**: Due to their complexity and size, large language models can act as "black boxes", making it difficult to comprehend the rationale behind specific outputs or decisions. We recommend reviewing transparency notes from Azure for more information.

**Content Harms**: There are various types of content harms that large language models can cause. It is important to be aware of them when using these models, and to take action to prevent them. It is recommended to leverage various content moderation services provided by different companies and institutions. On an important note, we hope for better regulations and standards from government and technology leaders around content harms for AI technologies in the future. We value and acknowledge the important role that research and the open-source community can play in this direction.

**Hallucination**: It is important to be cautious and not rely entirely on a given language model for critical decisions or information that might have deep impact, as it is not obvious how to prevent these models from fabricating content. Moreover, it is not clear whether small models may be more susceptible to hallucination in ungrounded generation use cases due to their smaller sizes and hence reduced memorization capacities. This is an active research topic and we hope there will be more rigorous measurement, understanding and mitigations around this topic.

**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could be maliciously used for generating disinformation or harmful content.

**Data Distribution**: Orca 2's performance is likely to correlate strongly with the distribution of the tuning data. This correlation might limit its accuracy in areas underrepresented in the training dataset such as math, coding, and reasoning.

**System messages**: Orca 2 demonstrates variance in performance depending on the system instructions. Additionally, the stochasticity introduced by the model size may lead to generation of non-deterministic responses to different system instructions.

**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulates zero-shot settings. While the model demonstrates very strong performance in zero-shot settings, it does not show the same gains from few-shot learning compared to other, especially larger, models.

**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages and shortcomings of the models and methods used for data generation. We posit that Orca 2 benefits from the safety measures incorporated during training and safety guardrails (e.g., content filter) within the Azure OpenAI API. However, detailed studies are required for better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried out in such environments. It should not be used in downstream applications, as additional analysis is needed to assess potential harm or bias in the proposed application.

## Getting started with Orca 2

**Inference with the Hugging Face library**

```python
import torch
import transformers

if torch.cuda.is_available():
    torch.set_default_device("cuda")
else:
    torch.set_default_device("cpu")

model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-7b", device_map='auto')

# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since the fast and slow tokenizers produce different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
        "microsoft/Orca-2-7b",
        use_fast=False,
    )

system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"

prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]

print(answer)

# This example continues by showing how to add a second user turn to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."

# we set add_special_tokens=False because we don't want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)

output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]

print(second_turn_answer)
```

**Safe inference with Azure AI Content Safety**

The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged and can help prevent some content harms. Azure AI Content Safety is a content moderation platform that uses AI to moderate content. By having Azure AI Content Safety on the output of Orca 2, the model output can be moderated by scanning it for different harm categories including sexual content, violence, hate, and self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions

CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]

# We use Azure AI Content Safety to filter out any content that reaches the "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
    # Create a Content Safety client
    client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))

    # Construct a request
    request = AnalyzeTextOptions(text=input_text)

    # Analyze text
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        print("Analyze text failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
    max_score = -math.inf
    for category in categories:
        max_score = max(max_score, getattr(response, category).severity)

    return max_score >= threshold

model_path = 'microsoft/Orca-2-7b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)

tokenizer = transformers.AutoTokenizer.from_pretrained(
        model_path,
        model_max_length=4096,
        padding_side="right",
        use_fast=False,
        add_special_tokens=False,
    )

system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."

prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)

output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"

print(final_output)
```

## Citation
```bibtex
@misc{mitra2023orca,
      title={Orca 2: Teaching Small Language Models How to Reason},
      author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
      year={2023},
      eprint={2311.11045},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```
marcchew/TinyLLaMA-1.1B-OrcaPlatty
marcchew
2023-11-21T09:20:53Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:jeff31415/TinyLlama-1.1B-1T-OpenOrca", "base_model:finetune:jeff31415/TinyLlama-1.1B-1T-OpenOrca", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-21T08:18:40Z
--- license: apache-2.0 base_model: jeff31415/TinyLlama-1.1B-1T-OpenOrca tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [jeff31415/TinyLlama-1.1B-1T-OpenOrca](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9e-07 - train_batch_size: 20 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1726 | 0.03 | 8 | 2.3170 | | 2.1444 | 0.05 | 16 | 2.2937 | | 2.1036 | 0.08 | 24 | 2.2707 | | 2.0703 | 0.1 | 32 | 2.2478 | | 2.0604 | 0.13 | 40 | 2.2248 | | 2.046 | 0.15 | 48 | 2.2013 | | 1.9919 | 0.18 | 56 | 2.1780 | | 1.9842 | 0.21 | 64 | 2.1547 | | 1.9234 | 0.23 | 72 | 2.1320 | | 1.9235 | 0.26 | 80 | 2.1099 | | 1.9096 | 0.28 | 88 | 2.0884 | | 1.8722 | 0.31 | 96 | 2.0679 | | 1.8594 | 0.34 | 104 | 2.0479 | | 1.8438 | 0.36 | 112 | 2.0283 | | 1.7581 | 0.39 | 120 | 2.0089 | | 1.7852 | 0.41 | 128 | 1.9901 | | 1.7634 | 0.44 | 136 | 1.9714 | | 1.7296 | 0.46 | 144 | 1.9531 | | 1.6976 | 0.49 | 152 | 1.9353 | | 1.6861 | 0.52 | 160 | 1.9173 | | 1.6683 | 0.54 | 168 | 1.8993 | | 1.6255 | 0.57 | 176 | 1.8826 | | 1.619 | 0.59 | 184 | 1.8673 | | 1.6455 | 0.62 | 192 | 1.8534 | | 1.5784 | 0.65 | 200 | 1.8399 | | 1.6078 | 0.67 | 208 | 1.8259 | | 1.5703 | 0.7 | 216 | 1.8124 | | 1.5215 | 0.72 | 224 | 1.7989 | | 1.542 | 0.75 | 232 | 1.7852 | | 1.5147 | 0.77 | 240 | 1.7721 | | 1.5092 | 0.8 | 248 | 1.7589 | | 1.4564 | 0.83 | 256 | 1.7456 | | 1.4985 | 0.85 | 264 | 1.7324 | | 1.4505 | 0.88 | 272 | 1.7189 | | 1.4447 | 0.9 | 280 | 1.7052 | | 1.4436 | 0.93 | 288 | 1.6924 | | 1.4132 | 0.95 | 296 | 1.6799 | | 1.3791 | 0.98 | 304 | 1.6680 | | 1.3877 | 1.01 | 312 | 1.6565 | | 1.3807 | 1.03 | 320 | 1.6453 | | 1.3391 | 1.06 | 328 | 1.6352 | | 1.3232 | 1.08 | 336 | 1.6251 | | 1.3293 | 1.11 | 344 | 1.6159 | | 1.3029 | 1.14 | 352 | 1.6074 | | 1.3173 | 1.16 | 360 | 1.5992 | | 1.3006 | 1.19 | 368 | 1.5926 | | 1.2547 | 1.21 | 376 | 1.5863 | | 1.2704 | 1.24 | 384 | 1.5805 | | 1.2964 | 1.26 | 392 | 1.5749 | | 1.277 | 1.29 | 400 | 1.5695 | | 1.2718 | 1.32 | 408 | 1.5657 | | 1.2379 | 1.34 | 416 | 1.5619 | | 1.2746 | 1.37 | 424 | 1.5585 | | 1.2349 | 1.39 | 432 | 1.5559 | | 1.2264 | 1.42 | 440 | 1.5531 | | 1.2365 | 1.45 | 448 | 1.5505 | | 1.2242 | 1.47 | 456 | 1.5484 | | 1.2094 | 1.5 | 464 | 1.5462 | | 1.2196 | 1.52 | 472 | 1.5444 | | 1.2447 | 1.55 | 480 | 1.5426 | | 1.2127 | 1.57 | 488 | 1.5407 | | 1.2278 | 1.6 | 496 | 1.5391 | | 1.2089 | 1.63 | 504 | 1.5377 | | 1.2069 | 1.65 | 512 | 1.5361 | | 1.2264 | 1.68 | 520 | 1.5350 | | 1.2027 | 1.7 | 528 | 1.5338 | | 1.2138 | 1.73 | 536 | 1.5325 | | 1.207 | 1.75 | 544 | 1.5313 | | 1.2155 | 1.78 | 552 | 1.5304 | | 1.2192 | 1.81 | 560 | 1.5295 | | 1.2223 | 1.83 | 568 | 1.5287 | | 1.2281 | 1.86 | 576 | 
1.5278 | | 1.1977 | 1.88 | 584 | 1.5269 | | 1.2101 | 1.91 | 592 | 1.5261 | | 1.2099 | 1.94 | 600 | 1.5254 | | 1.1873 | 1.96 | 608 | 1.5245 | | 1.204 | 1.99 | 616 | 1.5242 | | 1.21 | 2.01 | 624 | 1.5239 | | 1.242 | 2.04 | 632 | 1.5231 | | 1.1696 | 2.06 | 640 | 1.5224 | | 1.1803 | 2.09 | 648 | 1.5218 | | 1.1692 | 2.12 | 656 | 1.5213 | | 1.212 | 2.14 | 664 | 1.5208 | | 1.1977 | 2.17 | 672 | 1.5204 | | 1.187 | 2.19 | 680 | 1.5201 | | 1.1858 | 2.22 | 688 | 1.5199 | | 1.1824 | 2.25 | 696 | 1.5194 | | 1.1914 | 2.27 | 704 | 1.5190 | | 1.1815 | 2.3 | 712 | 1.5187 | | 1.2021 | 2.32 | 720 | 1.5184 | | 1.1872 | 2.35 | 728 | 1.5181 | | 1.1901 | 2.37 | 736 | 1.5178 | | 1.1933 | 2.4 | 744 | 1.5177 | | 1.1773 | 2.43 | 752 | 1.5175 | | 1.1935 | 2.45 | 760 | 1.5172 | | 1.2118 | 2.48 | 768 | 1.5170 | | 1.1816 | 2.5 | 776 | 1.5169 | | 1.1842 | 2.53 | 784 | 1.5167 | | 1.1891 | 2.55 | 792 | 1.5165 | | 1.1883 | 2.58 | 800 | 1.5164 | | 1.1506 | 2.61 | 808 | 1.5163 | | 1.1708 | 2.63 | 816 | 1.5162 | | 1.1944 | 2.66 | 824 | 1.5160 | | 1.1575 | 2.68 | 832 | 1.5159 | | 1.1698 | 2.71 | 840 | 1.5160 | | 1.1525 | 2.74 | 848 | 1.5158 | | 1.1767 | 2.76 | 856 | 1.5157 | | 1.1943 | 2.79 | 864 | 1.5158 | | 1.1727 | 2.81 | 872 | 1.5157 | | 1.195 | 2.84 | 880 | 1.5157 | | 1.1771 | 2.86 | 888 | 1.5157 | | 1.1731 | 2.89 | 896 | 1.5156 | | 1.191 | 2.92 | 904 | 1.5157 | | 1.1903 | 2.94 | 912 | 1.5156 | | 1.1821 | 2.97 | 920 | 1.5156 | | 1.2 | 2.99 | 928 | 1.5156 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.1.dev0 - Tokenizers 0.15.0
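## Inference example

The card omits a usage example; below is a minimal text-generation sketch with `transformers` (the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marcchew/TinyLLaMA-1.1B-OrcaPlatty"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Explain gradient accumulation in one paragraph.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```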
alpcansoydas/bert-base-arabic-emotion-analysis-v2.model
alpcansoydas
2023-11-21T09:17:00Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:asafaya/bert-base-arabic", "base_model:finetune:asafaya/bert-base-arabic", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-21T09:05:30Z
---
base_model: asafaya/bert-base-arabic
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-arabic-emotion-analysis-v2.model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-base-arabic-emotion-analysis-v2.model

This model is a fine-tuned version of [asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5681
- Accuracy: 0.8025
- F1: 0.8032

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9804        | 1.0   | 102  | 0.6430          | 0.7804   | 0.7823 |
| 0.5684        | 2.0   | 204  | 0.5897          | 0.7901   | 0.7899 |
| 0.4309        | 3.0   | 306  | 0.5681          | 0.8025   | 0.8032 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
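## Inference example

The card does not include a usage example; a minimal sketch with the `text-classification` pipeline follows (the example sentence is illustrative, and the label set is not documented in this card):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="alpcansoydas/bert-base-arabic-emotion-analysis-v2.model",
)

# "I am very happy today" -- returns the predicted emotion label and score.
print(classifier("أنا سعيد جدا اليوم"))
```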
Shubhang999/newproject
Shubhang999
2023-11-21T09:09:41Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/zephyr-7B-alpha-GPTQ", "base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ", "license:mit", "region:us" ]
null
2023-11-21T06:54:35Z
---
license: mit
base_model: TheBloke/zephyr-7B-alpha-GPTQ
tags:
- generated_from_trainer
model-index:
- name: newproject
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# newproject

This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 50
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
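## Loading the model (sketch)

The card does not show how to load the checkpoint. Since the base model is a GPTQ quant and the repo tags suggest a PEFT-style fine-tune, here is a hedged sketch; it assumes the repo contains a LoRA/PEFT adapter and that `optimum` and `auto-gptq` are installed for the quantised base:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel  # assumes the repo holds a PEFT adapter

base_id = "TheBloke/zephyr-7B-alpha-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapter on top of the quantised base model.
model = PeftModel.from_pretrained(base, "Shubhang999/newproject")

inputs = tokenizer("Tell me about AI", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```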
TheBloke/Orca-2-13B-GPTQ
TheBloke
2023-11-21T09:09:37Z
53
27
transformers
[ "transformers", "safetensors", "llama", "text-generation", "orca", "orca2", "microsoft", "arxiv:2311.11045", "base_model:microsoft/Orca-2-13b", "base_model:quantized:microsoft/Orca-2-13b", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-11-21T08:30:58Z
--- base_model: microsoft/Orca-2-13b inference: false license: other model_creator: Microsoft model_name: Orca 2 13B model_type: llama pipeline_tag: text-generation prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke tags: - orca - orca2 - microsoft --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Orca 2 13B - GPTQ - Model creator: [Microsoft](https://huggingface.co/microsoft) - Original model: [Orca 2 13B](https://huggingface.co/microsoft/Orca-2-13b) <!-- description start --> # Description This repo contains GPTQ model files for [Microsoft's Orca 2 13B](https://huggingface.co/microsoft/Orca-2-13b). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Orca-2-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Orca-2-13B-GGUF) * [Microsoft's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/microsoft/Orca-2-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. 
|
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 14.55 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [c4](https://huggingface.co/datasets/allenai/c4/viewer/allenai--c4) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/Orca-2-13B-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, e.g. `TheBloke/Orca-2-13B-GPTQ:gptq-4bit-32g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `Orca-2-13B-GPTQ`:

```shell
mkdir Orca-2-13B-GPTQ
huggingface-cli download TheBloke/Orca-2-13B-GPTQ --local-dir Orca-2-13B-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Orca-2-13B-GPTQ
huggingface-cli download TheBloke/Orca-2-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Orca-2-13B-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir Orca-2-13B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Orca-2-13B-GPTQ --local-dir Orca-2-13B-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Orca-2-13B-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)

<!-- README_GPTQ.md-download-from-branches end -->

<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Orca-2-13B-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/Orca-2-13B-GPTQ:gptq-4bit-32g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Orca-2-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!

<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Orca-2-13B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
# System prompt for the ChatML template below; customise as needed.
system_message = "You are a helpful assistant."
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

client = InferenceClient(endpoint_url)
# Send the fully formatted ChatML prompt, not the bare prompt.
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```

### Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Orca-2-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # set your own system prompt here
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Microsoft's Orca 2 13B # Orca 2 <!-- Provide a quick summary of what the model is/does. --> Orca 2 is a helpful assistant that is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reasoning. We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs. ## What is Orca 2’s intended use(s)? + Orca 2 is built for research purposes only. + The main purpose is to allow the research community to assess its abilities and to provide a foundation for building better frontier models. ## How was Orca 2 evaluated? + Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations. ## Model Details Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities. All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf). Please refer to LLaMA-2 technical report for details on the model architecture. ## License Orca 2 is licensed under the [Microsoft Research License](LICENSE). Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved. 
## Bias, Risks, and Limitations

Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the common limitations of other large language models and limitations caused by its training process, including:

**Data Biases**: Large language models, trained on extensive data, can inadvertently carry biases present in the source data. Consequently, the models may generate outputs that could be potentially biased or unfair.

**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting in potential inaccuracies or nonsensical responses.

**Lack of Transparency**: Due to their complexity and size, large language models can act as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or decisions. We recommend reviewing the transparency notes from Azure for more information.

**Content Harms**: There are various types of content harms that large language models can cause. It is important to be aware of them when using these models, and to take actions to prevent them. It is recommended to leverage various content moderation services provided by different companies and institutions. On an important note, we hope for better regulations and standards from government and technology leaders around content harms for AI technologies in future. We value and acknowledge the important role that the research and open source community can play in this direction.

**Hallucination**: It is important to be aware and cautious not to entirely rely on a given language model for critical decisions or information that might have deep impact, as it is not obvious how to prevent these models from fabricating content. Moreover, it is not clear whether small models may be more susceptible to hallucination in ungrounded generation use cases due to their smaller sizes and hence reduced memorization capacities. This is an active research topic and we hope there will be more rigorous measurement, understanding and mitigations around this topic.

**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could be maliciously used for generating disinformation or harmful content.

**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution of the tuning data. This correlation might limit its accuracy in areas underrepresented in the training dataset, such as math, coding, and reasoning.

**System messages**: Orca 2 demonstrates variance in performance depending on the system instructions. Additionally, the stochasticity introduced by the model size may lead to generation of non-deterministic responses to different system instructions.

**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulates zero-shot settings. While the model demonstrates very strong performance in zero-shot settings, it does not show the same gains from few-shot learning as other, especially larger, models.

**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages and shortcomings of the models and methods used for data generation. We posit that Orca 2 benefits from the safety measures incorporated during training and safety guardrails (e.g., content filter) within the Azure OpenAI API. However, detailed studies are required for better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried out in such environments. It should not be used in downstream applications, as additional analysis is needed to assess potential harm or bias in the proposed application.

## Getting started with Orca 2

**Inference with Hugging Face library**

```python
import torch
import transformers

if torch.cuda.is_available():
    torch.set_default_device("cuda")
else:
    torch.set_default_device("cpu")

model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b", device_map='auto')

# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer, since the fast and slow tokenizers produce different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Orca-2-13b",
    use_fast=False,
)

system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"

prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"])
answer = tokenizer.batch_decode(output_ids)[0]

print(answer)

# This example continues by showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."

# we set add_special_tokens=False because we don't want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)

output_ids_2 = model.generate(second_turn_input)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]

print(second_turn_answer)
```

**Safe inference with Azure AI Content Safety**

The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged and can help prevent content harms. Azure AI Content Safety is a content moderation platform that uses AI to keep your content safe. By integrating Orca 2 with Azure AI Content Safety, we can moderate the model output by scanning it for sexual content, violence, hate, and self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions

CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]

# We use Azure AI Content Safety to filter out any content that reaches the "Medium" severity threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
    # Create a Content Safety client
    client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))

    # Construct a request
    request = AnalyzeTextOptions(text=input_text)

    # Analyze text
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        print("Analyze text failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
    max_score = -math.inf
    for category in categories:
        max_score = max(max_score, getattr(response, category).severity)

    return max_score >= threshold

model_path = 'microsoft/Orca-2-13b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)

tokenizer = transformers.AutoTokenizer.from_pretrained(
    model_path,
    model_max_length=4096,
    padding_side="right",
    use_fast=False,
    add_special_tokens=False,
)

system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."

prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)

output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"

print(final_output)
```

## Citation
```bibtex
@misc{mitra2023orca,
      title={Orca 2: Teaching Small Language Models How to Reason},
      author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
      year={2023},
      eprint={2311.11045},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```