Dataset columns:

| Column | Type | Range |
|---|---|---|
| pipeline_tag | string | 48 classes |
| library_name | string | 205 classes |
| text | string | 0 to 18.3M chars |
| metadata | string | 2 to 1.07B chars |
| id | string | 5 to 122 chars |
| last_modified | null | |
| tags | sequence | 1 to 1.84k items |
| sha | null | |
| created_at | string | 25 chars |
text-generation
transformers
Based on Meta-Llama-3-8B-Instruct and governed by the Meta Llama 3 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

This v0.2 version uses an improved Dolphin-based dataset, but only 150K examples for testing instead of the full 850K. It doesn't seem to work that well yet, so we will need to add the rest of the dataset. We are happy for anyone to try it out and give feedback.

Training:

- 4096 sequence length, while the base model has an 8192 sequence length. From testing, it still performs fine at the full 8192 context.
- Trained on a modified and improved version of Cognitive Computations (Eric Hartford)'s Dolphin dataset: https://huggingface.co/datasets/cognitivecomputations/dolphin
- Training took around 1 day on 2x RTX 3090 on our own machine, using 4-bit loading and QLoRA (rank 64, alpha 128), resulting in ~2% trainable weights.

The goal for this model is to be less censored and great at general tasks, like the previous Dolphin-based models by Eric Hartford.

Instruct format:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Quants:

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

Axolotl Config:

```
base_model: /home/owen/models/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
train_on_inputs: false
group_by_length: false
load_in_8bit: false
load_in_4bit: true
strict: false
sequence_len: 4096
bf16: true
fp16: false
tf32: false
flash_attention: true

# Data
datasets:
  - path: /home/owen/datasets/cleaned-dolphin201-sharegpt2-uuid-improved.jsonl
    type:
      field_instruction: input
      field_output: output
      format: "<|start_header_id|>user<|end_header_id|>\n\n{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
      no_input_format: "<|start_header_id|>user<|end_header_id|>\n\n{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
warmup_steps: 10
dataset_prepared_path: ./last_run_prepared

# Iterations
num_epochs: 1
saves_per_epoch: 4

# Evaluation
val_set_size: 0.01
eval_table_size:
eval_table_max_new_tokens:
eval_sample_packing: false
evals_per_epoch: 4

# LoRA
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
save_safetensors: true

# Sampling
sample_packing: true
pad_to_sequence_len: true

# Batching
gradient_accumulation_steps: 32
micro_batch_size: 2
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true

# wandb
wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb
wandb_project: llama-3-8b-instruct-dolphin-q
wandb_entity: # A wandb Team name if using a Team
wandb_watch:
wandb_name: 64-128-4096-1ep-v0.2
wandb_run_id: # Set the ID of your wandb run
wandb_log_model: # "checkpoint" to log model to wandb Artifacts every `save_steps` or "end" to log only at the end of training

# Optimizer
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002

# Misc
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
debug:
deepspeed: /home/owen/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
special_tokens:
  pad_token: <|end_of_text|>
```
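For anyone trying the model, a minimal inference sketch, assuming a recent transformers release with chat-template support; the repo id comes from this card, the prompt content is a placeholder:

```python
# Minimal sketch: prompting the model in the Llama-3 instruct format shown above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain QLoRA in two sentences."},
]
# apply_chat_template emits the <|start_header_id|>...<|eot_id|> framing above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```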
{"license": "llama3"}
AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T00:45:29+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.5972 - F1 Score: 0.6840 - Accuracy: 0.6872 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6453 | 1.04 | 200 | 0.6341 | 0.5788 | 0.6383 | | 0.6199 | 2.08 | 400 | 0.6253 | 0.6551 | 0.6536 | | 0.6124 | 3.12 | 600 | 0.6121 | 0.6474 | 0.6663 | | 0.6068 | 4.17 | 800 | 0.6168 | 0.6541 | 0.6543 | | 0.603 | 5.21 | 1000 | 0.6082 | 0.6700 | 0.6771 | | 0.5996 | 6.25 | 1200 | 0.6052 | 0.6664 | 0.6758 | | 0.5948 | 7.29 | 1400 | 0.6092 | 0.6734 | 0.6738 | | 0.5953 | 8.33 | 1600 | 0.6112 | 0.6708 | 0.6719 | | 0.5924 | 9.38 | 1800 | 0.6172 | 0.6623 | 0.6601 | | 0.5904 | 10.42 | 2000 | 0.6073 | 0.6727 | 0.6745 | | 0.5926 | 11.46 | 2200 | 0.6028 | 0.6673 | 0.6777 | | 0.5848 | 12.5 | 2400 | 0.6064 | 0.6717 | 0.6781 | | 0.5835 | 13.54 | 2600 | 0.6065 | 0.6705 | 0.6751 | | 0.5849 | 14.58 | 2800 | 0.6048 | 0.6680 | 0.6787 | | 0.5878 | 15.62 | 3000 | 0.6038 | 0.6757 | 0.6813 | | 0.5813 | 16.67 | 3200 | 0.6048 | 0.6738 | 0.6771 | | 0.5816 | 17.71 | 3400 | 0.6032 | 0.6741 | 0.6800 | | 0.579 | 18.75 | 3600 | 0.6046 | 0.6740 | 0.6843 | | 0.5801 | 19.79 | 3800 | 0.6059 | 0.6749 | 0.6787 | | 0.5743 | 20.83 | 4000 | 0.6045 | 0.6727 | 0.6764 | | 0.5794 | 21.88 | 4200 | 0.6088 | 0.6678 | 0.6686 | | 0.5773 | 22.92 | 4400 | 0.6064 | 0.6705 | 0.6738 | | 0.5752 | 23.96 | 4600 | 0.6097 | 0.6666 | 0.6660 | | 0.5729 | 25.0 | 4800 | 0.6044 | 0.6750 | 0.6807 | | 0.5728 | 26.04 | 5000 | 0.6071 | 0.6740 | 0.6768 | | 0.5737 | 27.08 | 5200 | 0.6037 | 0.6719 | 0.6748 | | 0.5703 | 28.12 | 5400 | 0.6019 | 0.6711 | 0.6748 | | 0.5693 | 29.17 | 5600 | 0.6054 | 0.6732 | 0.6781 | | 0.5676 | 30.21 | 5800 | 0.6089 | 0.6742 | 0.6781 | | 0.5676 | 31.25 | 6000 | 0.6081 | 0.6739 | 0.6755 | | 0.5676 | 32.29 | 6200 | 0.6024 | 0.6710 | 0.6768 | | 0.5691 | 33.33 | 6400 | 0.6063 | 0.6726 | 0.6751 | | 0.5675 | 34.38 | 6600 | 0.6028 | 0.6770 | 0.6820 | | 0.5651 | 35.42 | 6800 | 0.6033 | 0.6781 | 0.6813 | | 0.5651 | 36.46 | 7000 | 0.6041 | 0.6730 | 0.6777 | | 0.5629 | 37.5 | 7200 | 0.6079 | 0.6762 | 0.6784 | | 0.5621 | 38.54 | 7400 | 0.6077 | 0.6743 | 0.6787 | | 0.5692 | 39.58 | 7600 | 0.6039 | 0.6761 | 0.6820 | | 0.5646 | 40.62 | 7800 | 0.6098 | 0.6734 | 0.6732 | | 0.5648 | 41.67 | 8000 | 0.6045 | 0.6757 | 0.6777 | | 0.562 | 42.71 | 8200 | 0.6058 | 0.6736 | 0.6790 | | 0.5631 | 43.75 | 8400 | 0.6061 | 0.6742 | 0.6771 | | 0.5625 | 44.79 | 8600 | 0.6063 | 0.6716 | 0.6738 | | 0.5593 | 45.83 | 8800 | 0.6061 | 0.6735 | 0.6781 | | 0.5636 
| 46.88 | 9000 | 0.6087 | 0.6768 | 0.6777 | | 0.5599 | 47.92 | 9200 | 0.6076 | 0.6779 | 0.6800 | | 0.561 | 48.96 | 9400 | 0.6063 | 0.6760 | 0.6790 | | 0.5598 | 50.0 | 9600 | 0.6083 | 0.6773 | 0.6790 | | 0.5606 | 51.04 | 9800 | 0.6076 | 0.6770 | 0.6790 | | 0.5586 | 52.08 | 10000 | 0.6076 | 0.6773 | 0.6794 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
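Since this repo holds only a PEFT adapter, a hedged loading sketch; the Auto class and the trust_remote_code flag are assumptions, because the card does not document the base model's classification head:

```python
# Sketch: attaching the LoRA adapter to its base model with PEFT.
# AutoModelForSequenceClassification and trust_remote_code=True are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)  # wraps the base with the trained adapter
model.eval()
```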
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T00:47:34+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA14 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0127 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5488 | 0.09 | 10 | 0.1855 | | 0.1628 | 0.18 | 20 | 0.1581 | | 0.1519 | 0.27 | 30 | 0.1629 | | 0.1574 | 0.36 | 40 | 0.1531 | | 0.1517 | 0.45 | 50 | 0.1502 | | 0.1532 | 0.54 | 60 | 0.1478 | | 0.149 | 0.63 | 70 | 0.1477 | | 0.1492 | 0.73 | 80 | 0.1547 | | 0.1464 | 0.82 | 90 | 0.1491 | | 0.149 | 0.91 | 100 | 0.1511 | | 0.1511 | 1.0 | 110 | 0.1485 | | 0.147 | 1.09 | 120 | 0.1478 | | 0.1475 | 1.18 | 130 | 0.1559 | | 0.1484 | 1.27 | 140 | 0.1525 | | 0.1515 | 1.36 | 150 | 0.1506 | | 0.1459 | 1.45 | 160 | 0.1468 | | 0.1455 | 1.54 | 170 | 0.1474 | | 0.1475 | 1.63 | 180 | 0.1475 | | 0.1468 | 1.72 | 190 | 0.1489 | | 0.144 | 1.81 | 200 | 0.1465 | | 0.1444 | 1.9 | 210 | 0.1328 | | 0.105 | 1.99 | 220 | 0.0724 | | 0.1351 | 2.08 | 230 | 0.0626 | | 0.0575 | 2.18 | 240 | 0.0530 | | 0.0396 | 2.27 | 250 | 0.0528 | | 0.0463 | 2.36 | 260 | 0.0343 | | 0.0316 | 2.45 | 270 | 0.0253 | | 0.0186 | 2.54 | 280 | 0.0185 | | 0.0338 | 2.63 | 290 | 0.0150 | | 0.0168 | 2.72 | 300 | 0.0155 | | 0.0149 | 2.81 | 310 | 0.0128 | | 0.0198 | 2.9 | 320 | 0.0127 | | 0.0184 | 2.99 | 330 | 0.0127 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
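As a reading aid, a hedged reconstruction of the training hyperparameters listed above as transformers `TrainingArguments`; the output_dir is a placeholder, only the numbers come from the card, and they show how the total train batch of 128 arises from 8 x 16 gradient accumulation:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="O0428HMA14",           # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,    # 8 * 16 = total train batch size of 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
)
```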
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA14", "results": []}]}
Litzy619/O0428HMA14
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T00:48:06+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/xqqvt3r
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T00:48:06+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA15 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1427 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5625 | 0.09 | 10 | 0.2337 | | 0.1706 | 0.18 | 20 | 0.1519 | | 0.1507 | 0.27 | 30 | 0.1728 | | 0.1576 | 0.36 | 40 | 0.1556 | | 0.1494 | 0.45 | 50 | 0.1485 | | 0.1503 | 0.54 | 60 | 0.1484 | | 0.1504 | 0.63 | 70 | 0.1474 | | 0.1504 | 0.73 | 80 | 0.1569 | | 0.1468 | 0.82 | 90 | 0.1503 | | 0.1497 | 0.91 | 100 | 0.1497 | | 0.153 | 1.0 | 110 | 0.1512 | | 0.1464 | 1.09 | 120 | 0.1479 | | 0.1468 | 1.18 | 130 | 0.1521 | | 0.1486 | 1.27 | 140 | 0.1491 | | 0.1517 | 1.36 | 150 | 0.1495 | | 0.1469 | 1.45 | 160 | 0.1515 | | 0.1479 | 1.54 | 170 | 0.1503 | | 0.1491 | 1.63 | 180 | 0.1471 | | 0.148 | 1.72 | 190 | 0.1480 | | 0.1456 | 1.81 | 200 | 0.1488 | | 0.1504 | 1.9 | 210 | 0.1493 | | 0.1468 | 1.99 | 220 | 0.1490 | | 0.1471 | 2.08 | 230 | 0.1460 | | 0.1413 | 2.18 | 240 | 0.1468 | | 0.143 | 2.27 | 250 | 0.1468 | | 0.1435 | 2.36 | 260 | 0.1476 | | 0.1426 | 2.45 | 270 | 0.1461 | | 0.1398 | 2.54 | 280 | 0.1447 | | 0.1402 | 2.63 | 290 | 0.1446 | | 0.1426 | 2.72 | 300 | 0.1424 | | 0.1421 | 2.81 | 310 | 0.1428 | | 0.1415 | 2.9 | 320 | 0.1428 | | 0.1419 | 2.99 | 330 | 0.1427 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA15", "results": []}]}
Litzy619/O0428HMA15
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T00:48:50+00:00
null
transformers
# Uploaded model

- **Developed by:** robgonsalves
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
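A hedged loading sketch with Unsloth; max_seq_length is an assumption, since the card does not state the training sequence length:

```python
# Sketch: loading this LoRA on its 4-bit base with Unsloth's FastLanguageModel.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="robgonsalves/llama-3-8b-Instruct-lora-64-64",
    max_seq_length=8192,  # assumption; not stated in the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode
```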
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
robgonsalves/llama-3-8b-Instruct-lora-64-64
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T00:49:28+00:00
null
null
# andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF

This model was converted to GGUF format from [`maywell/Llama-3-Synatra-11B-v1-20k`](https://huggingface.co/maywell/Llama-3-Synatra-11B-v1-20k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/maywell/Llama-3-Synatra-11B-v1-20k) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF --model llama-3-synatra-11b-v1-20k.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF --model llama-3-synatra-11b-v1-20k.Q4_K_M.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-synatra-11b-v1-20k.Q4_K_M.gguf -n 128
```
{"license": "other", "tags": ["llama-cpp", "gguf-my-repo"], "license_name": "llama3", "license_link": "LICENSE"}
andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:other", "region:us" ]
null
2024-04-30T00:49:35+00:00
null
null
{}
iambikash/DistilBertForTokenClassification_NLP
null
[ "region:us" ]
null
2024-04-30T00:50:54+00:00
text-to-image
diffusers
{"license": "gpl-3.0"}
circulus/canvers-anime-v3.9
null
[ "diffusers", "safetensors", "license:gpl-3.0", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-30T00:51:11+00:00
text-to-image
diffusers
{"license": "gpl-3.0"}
circulus/canvers-story-v3.9
null
[ "diffusers", "safetensors", "license:gpl-3.0", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-30T00:51:54+00:00
text-to-image
diffusers
{"license": "gpl-3.0"}
circulus/canvers-disney-v3.9
null
[ "diffusers", "safetensors", "license:gpl-3.0", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-30T00:52:10+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# output_dir

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0625
- Accuracy: 0.9828

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
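A minimal usage sketch; the fine-tune's label names are not documented in this card, so outputs may surface as generic LABEL_0/LABEL_1:

```python
from transformers import pipeline

# Sketch: running the fine-tuned classifier; the id2label mapping is undocumented.
clf = pipeline("text-classification", model="tralon/test-v3-new")
print(clf("A quick smoke test of the classifier."))
```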
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-base", "model-index": [{"name": "output_dir", "results": []}]}
tralon/test-v3-new
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T00:52:13+00:00
text-to-image
diffusers
{"license": "gpl-3.0"}
circulus/canvers-real-v3.9
null
[ "diffusers", "safetensors", "license:gpl-3.0", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-30T00:52:21+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-hausa-sentimental

This model is a fine-tuned version of [Davlan/bert-base-multilingual-cased-finetuned-hausa](https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-hausa) for sentiment analysis in the Hausa language.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

- Accuracy: 0.8372093023255814

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
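A minimal usage sketch; label names are not documented in the card, and the Hausa example sentence is ours, not from the training data:

```python
from transformers import pipeline

# Sketch: Hausa sentiment classification; expect generic LABEL_* outputs
# unless the repo config defines an id2label mapping.
clf = pipeline("text-classification", model="Oloruntobi/bert-hausa-sentimental")
print(clf("Ina son wannan fim din sosai."))  # "I really like this movie."
```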
{"language": ["ha"], "tags": ["generated_from_trainer"], "metrics": [{"accuracy": "0.8372093023255814"}], "base_model": "Davlan/bert-base-multilingual-cased-finetuned-hausa", "widget": [{"text": "Rubuta abun"}], "model-index": [{"name": "bert-hausa-sentimental", "results": []}]}
Oloruntobi/bert-hausa-sentimental
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "ha", "base_model:Davlan/bert-base-multilingual-cased-finetuned-hausa", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T00:53:40+00:00
text-generation
transformers
Experimental test model of Llama-3-8B (base), fine-tuned using the ChatML format. Only 50% of epoch 1 (out of two planned epochs) has been completed. Context was extended to 1 million tokens using the PoSE technique.
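For reference, a sketch of the ChatML framing the card refers to; this is the conventional ChatML layout, which the card names but does not spell out:

```python
# Conventional ChatML prompt framing (assumed; not confirmed by this card).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "How far can your context go?"))
```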
{"license": "apache-2.0"}
tavtav/Pyg-Llama-8B-1M-0.25
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T00:53:53+00:00
null
null
{}
Litzy619/O0428HMA16
null
[ "region:us" ]
null
2024-04-30T00:53:53+00:00
text-generation
transformers
# Smart-Lemon-Cookie-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) as the base.

### Models Merged

The following models were included in the merge:
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)

### Configuration

The following YAML configuration was used to produce this model (a sketch of the mergekit invocation to reproduce it follows below):

```yaml
models:
  - model: SanjiWatsuki/Silicon-Maid-7B
    parameters:
      density: 1.0
      weight: 1.0
  - model: SanjiWatsuki/Kunoichi-7B
    parameters:
      density: 0.4
      weight: 1.0
  - model: KatyTheCutie/LemonadeRP-4.5.3
    parameters:
      density: 0.6
      weight: 1.0
merge_method: ties
base_model: MTSAIR/multi_verse_model
parameters:
  normalize: true
dtype: float16
```
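To reproduce the merge, a hedged sketch of the usual mergekit invocation, assuming the YAML above is saved as config.yml; the output path is a placeholder:

```python
# Sketch: shelling out to mergekit's CLI; requires `pip install mergekit`.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yml", "./Smart-Lemon-Cookie-7B"],
    check=True,
)
```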
{"library_name": "transformers", "tags": ["mergekit", "merge", "mistral", "text-generation"], "base_model": ["SanjiWatsuki/Silicon-Maid-7B", "MTSAIR/multi_verse_model", "SanjiWatsuki/Kunoichi-7B", "KatyTheCutie/LemonadeRP-4.5.3"]}
FallenMerick/Smart-Lemon-Cookie-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:SanjiWatsuki/Silicon-Maid-7B", "base_model:MTSAIR/multi_verse_model", "base_model:SanjiWatsuki/Kunoichi-7B", "base_model:KatyTheCutie/LemonadeRP-4.5.3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T00:54:39+00:00
text-generation
transformers
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/6MKLoX2ruLIaREiyb6coO.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | |------------------------|-----------|-----------|-----------|-----------| | Initialize From | LLaMA-3 8B| 65K | 262K | 524k | | Sequence Length 2^N | 16 | 18 | 19 | 20 | | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B | | Batch Size | 1 | 1 | 16 | 16 | | Gradient Accumulation Steps | 32 | 16 | 1 | 1 | | Steps | 30 | 24 | 50 | 50 | | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall)| 202 | 555 | 61 | 87 | **Quants**: - [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF) - [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit) ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. 
"World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. 
### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 
100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, 
safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. 
The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. 
Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
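For intuition on the "NTK-aware interpolation to initialize an optimal schedule for RoPE theta" step described above, a hedged sketch of the standard scaling rule from [1]; note the thetas in the progressive-training table were subsequently optimized empirically, so they differ from this initialization:

```python
# NTK-aware RoPE theta scaling (Peng et al. [1]): theta' = theta * s**(d/(d-2)),
# where s is the context-extension factor and d the rotary head dimension.
# This is only the initialization rule the card mentions, not Gradient's final values.
def ntk_aware_theta(base_theta: float, scale: float, head_dim: int = 128) -> float:
    return base_theta * scale ** (head_dim / (head_dim - 2))

# e.g. extending Llama-3 8B (base theta 500k, head dim 128) from 8k to 65k context:
print(ntk_aware_theta(500_000.0, scale=8.0))  # ~4.1M as a starting point for tuning
```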
{"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"}
blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw2.5-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T00:54:40+00:00
null
null
{}
TravisGibby/tuned
null
[ "region:us" ]
null
2024-04-30T00:54:46+00:00
null
null
{}
robertosanchezsantos/natalie-portman
null
[ "region:us" ]
null
2024-04-30T00:54:50+00:00
text-generation
transformers
## saiga_llama3_8b

[exllamav2](https://github.com/turboderp/exllamav2) quant for [IlyaGusev/saiga_llama3_8b](https://huggingface.co/IlyaGusev/saiga_llama3_8b)

**Original model information:**

# Saiga/Llama3 8B, Russian Llama-3-based chatbot

Based on [Llama-3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).

Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf)

**WARNING!** I've changed the prompt format from ChatML to **the original Llama-3 format in v4**. Don't forget to switch formats!

**v4**: Llama-3 prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|eot_id|><|start_header_id|>user<|end_header_id|>

Как дела?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Отлично, а у тебя?<|eot_id|><|start_header_id|>user<|end_header_id|>

Шикарно. Как пройти в библиотеку?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```

(The Russian strings are the literal prompt the model was trained with, so they are kept verbatim here and in the ChatML block below. System prompt: "You are Saiga, a Russian-language automated assistant. You talk to people and help them." Sample turns: "How are you?" / "Great, and you?" / "Splendid. How do I get to the library?")

**v2, v3**: ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Как дела?<|im_end|>
<|im_start|>assistant
Отлично, а у тебя?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```

Examples (dialogues translated from Russian):
```
User: Why is grass green?

Saiga: The green color of grass is due to the presence of chlorophyll, a pigment that absorbs light and transfers energy for photosynthesis. Chlorophyll is contained in the cells of plant leaves and plays a key role in photosynthesis, when plants use sunlight to convert water and carbon dioxide into organic substances such as glucose and oligoesters.

Chlorophyll consists of two main forms: chlorophyll a and chlorophyll b. Chlorophyll a absorbs light with a wavelength of about 430 nanometers (nm), which corresponds to blue or light-blue light, while chlorophyll b absorbs light with a wavelength of about 660 nm, which corresponds to red light. When light is absorbed by chlorophyll, it is transferred to molecules of water and carbon dioxide, which are used for photosynthesis.

Thus, chlorophyll is the main component responsible for the green color of plant leaves, since it absorbs light and transfers its energy for photosynthesis.
```
```
User: Compose a long story, necessarily mentioning the following objects. Given: Tanya, ball

Saiga: One day, in sunny weather, a group of friends came to a football field to spend time together. Among them was Tanya, a passionate football fan who always delighted her friends with her enthusiasm and her ability to score goals.

On this day, as usual, the friends decided to hold a competition among themselves to determine which of them was the best football player. Tanya was very confident in her abilities and prepared for this match with particular seriousness.

When all the friends gathered on the field, they saw a huge ball standing before them, which was to become the object of the contest. The ball was huge and heavy, and its dimensions were unusually large compared to the ordinary balls used in football.

Tanya was the first to decide to start the game. She approached the ball and began trying to hold it, straining to bear its weight and force. But the ball turned out to be so heavy that Tanya could not hold it, and it fell to the ground.

Her friends laughed at her failure, but Tanya did not despair and kept trying to hold the ball. She used all her strength and skill to bear its weight and force. Finally, after much effort, she managed to hold the ball and began throwing it to the side.

The ball flew high into the air, and the friends watched it sail over the field. But the ball unexpectedly turned and flew back toward Tanya. She managed to catch it and continued playing, using all her skills and abilities.
```

v4:
- [1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dcbs9ttt)

v3:
- [c588356cd60bdee54d52c2dd5a2445acca8aa5c3](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/c588356cd60bdee54d52c2dd5a2445acca8aa5c3)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/ltoqdsal)

v2:
- [ae61b4f9b34fac9856d361ea78c66284a00e4f0b](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/ae61b4f9b34fac9856d361ea78c66284a00e4f0b)
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/huggingface/runs/r6u5juyk)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)

# Evaluation

* Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
* Framework: https://github.com/tatsu-lab/alpaca_eval
* Evaluator: alpaca_eval_cot_gpt4_turbo_fn

| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
| chatgpt_4_turbo | 76.04 | 90.00 | 1.46 | 1270 |
| chatgpt_3_5_turbo | 50.00 | 50.00 | 0.00 | 536 |
| saiga_llama3_8b, v4 | 43.64 | 65.90 | 2.31 | 1200 |
| saiga_llama3_8b, v3 | 36.97 | 61.08 | 2.38 | 1162 |
| saiga_llama3_8b, v2 | 33.07 | 48.19 | 2.45 | 1166 |
| saiga_mistral_7b | 23.38 | 35.99 | 2.34 | 949 |
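For a quick smoke test of the v4 prompt format outside exllamav2, the format can be applied via `transformers`. This is a minimal sketch against the full-precision `IlyaGusev/saiga_llama3_8b` repo (not this exl2 quant); the sampling settings are illustrative, and it assumes the repo's tokenizer carries the v4 Llama-3 chat template shown above:

```python
# Minimal sketch: chat with Saiga v4 through transformers.
# Uses the full-precision repo, not the exl2 quant; sampling values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IlyaGusev/saiga_llama3_8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."},
    {"role": "user", "content": "Почему трава зеленая?"},  # "Why is grass green?"
]
# apply_chat_template should render the <|start_header_id|>...<|eot_id|> layout shown above,
# assuming the tokenizer config ships the v4 chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```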
{"language": ["ru"], "license": "other", "tags": ["llama3", "8-bit"], "base_model": ["IlyaGusev/saiga_llama3_8b"], "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license/"}
Slvcxc/saiga_llama3_8b-V4-8.0bpw-h8-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "llama3", "8-bit", "conversational", "ru", "base_model:IlyaGusev/saiga_llama3_8b", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T00:56:09+00:00
text-generation
transformers
{}
DavidAhn/llama-3-Open-Ko-8B-instruct-slerp-alpha
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T00:56:10+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
Redliz/csci-566
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-30T00:58:14+00:00
null
null
{}
Litzy619/O0428HMA17
null
[ "region:us" ]
null
2024-04-30T00:58:41+00:00
null
null
{}
Litzy619/O0428HMA18
null
[ "region:us" ]
null
2024-04-30T00:58:42+00:00
text-generation
null
# andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF

This model was converted to GGUF format from [`maywell/Llama-3-Ko-8B-Instruct`](https://huggingface.co/maywell/Llama-3-Ko-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/maywell/Llama-3-Ko-8B-Instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF --model llama-3-ko-8b-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF --model llama-3-ko-8b-instruct.Q4_K_M.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-ko-8b-instruct.Q4_K_M.gguf -n 128
```
{"language": ["en", "ko"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"}
andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko", "llama-cpp", "gguf-my-repo", "text-generation", "en", "ko", "license:other", "region:us" ]
null
2024-04-30T00:59:10+00:00
null
null
{}
Greenfrogtime/aster
null
[ "region:us" ]
null
2024-04-30T01:00:33+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA8 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0553 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7563 | 0.09 | 10 | 0.1800 | | 0.1866 | 0.18 | 20 | 0.1584 | | 0.1527 | 0.27 | 30 | 0.1644 | | 0.1553 | 0.36 | 40 | 0.1529 | | 0.1503 | 0.45 | 50 | 0.1481 | | 0.15 | 0.54 | 60 | 0.1472 | | 0.1486 | 0.63 | 70 | 0.1470 | | 0.1491 | 0.73 | 80 | 0.1499 | | 0.1464 | 0.82 | 90 | 0.1489 | | 0.1483 | 0.91 | 100 | 0.1503 | | 0.1502 | 1.0 | 110 | 0.1495 | | 0.1424 | 1.09 | 120 | 0.1640 | | 0.1267 | 1.18 | 130 | 0.1005 | | 0.131 | 1.27 | 140 | 0.1975 | | 0.1214 | 1.36 | 150 | 0.0764 | | 0.0707 | 1.45 | 160 | 0.0716 | | 0.0616 | 1.54 | 170 | 0.0617 | | 0.0663 | 1.63 | 180 | 0.0601 | | 0.7161 | 1.72 | 190 | 4.6913 | | 0.5982 | 1.81 | 200 | 0.0666 | | 0.0729 | 1.9 | 210 | 0.0584 | | 0.0632 | 1.99 | 220 | 0.0565 | | 0.0603 | 2.08 | 230 | 0.0594 | | 0.0543 | 2.18 | 240 | 0.0587 | | 0.0564 | 2.27 | 250 | 0.0580 | | 0.0604 | 2.36 | 260 | 0.0579 | | 0.0544 | 2.45 | 270 | 0.0565 | | 0.0526 | 2.54 | 280 | 0.0556 | | 0.0548 | 2.63 | 290 | 0.0574 | | 0.0568 | 2.72 | 300 | 0.0552 | | 0.0568 | 2.81 | 310 | 0.0551 | | 0.0578 | 2.9 | 320 | 0.0553 | | 0.0604 | 2.99 | 330 | 0.0553 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
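The hyperparameter list above corresponds, roughly, to the following `transformers.TrainingArguments`. This is a reconstruction for readability, not the actual training script; the output path is a placeholder, and the Trainer's default AdamW stands in for the "Adam" named above:

```python
# Rough reconstruction of the hyperparameters listed above; not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="O0428HMA8",            # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,    # 8 x 16 = 128 total train batch size
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                         # "Native AMP" mixed precision
    adam_beta1=0.9,                    # betas=(0.9, 0.999), epsilon=1e-08 as listed
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```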
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA8", "results": []}]}
Litzy619/O0428HMA8
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T01:01:52+00:00
null
null
{}
isFei/text
null
[ "region:us" ]
null
2024-04-30T01:02:33+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA19 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1434 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3653 | 0.09 | 10 | 0.1662 | | 0.161 | 0.18 | 20 | 0.1547 | | 0.1465 | 0.27 | 30 | 0.1611 | | 0.1549 | 0.36 | 40 | 0.1513 | | 0.1531 | 0.45 | 50 | 0.1534 | | 0.1543 | 0.54 | 60 | 0.1653 | | 0.1577 | 0.63 | 70 | 0.1531 | | 0.1543 | 0.73 | 80 | 0.1564 | | 0.1478 | 0.82 | 90 | 0.1518 | | 0.1492 | 0.91 | 100 | 0.1490 | | 0.1513 | 1.0 | 110 | 0.1487 | | 0.1461 | 1.09 | 120 | 0.1484 | | 0.1476 | 1.18 | 130 | 0.1532 | | 0.1479 | 1.27 | 140 | 0.1507 | | 0.1494 | 1.36 | 150 | 0.1488 | | 0.1453 | 1.45 | 160 | 0.1479 | | 0.1461 | 1.54 | 170 | 0.1481 | | 0.1482 | 1.63 | 180 | 0.1464 | | 0.1481 | 1.72 | 190 | 0.1528 | | 0.147 | 1.81 | 200 | 0.1477 | | 0.1484 | 1.9 | 210 | 0.1474 | | 0.1458 | 1.99 | 220 | 0.1489 | | 0.1462 | 2.08 | 230 | 0.1475 | | 0.1408 | 2.18 | 240 | 0.1453 | | 0.1426 | 2.27 | 250 | 0.1462 | | 0.1441 | 2.36 | 260 | 0.1470 | | 0.1419 | 2.45 | 270 | 0.1451 | | 0.1396 | 2.54 | 280 | 0.1450 | | 0.1404 | 2.63 | 290 | 0.1449 | | 0.1433 | 2.72 | 300 | 0.1434 | | 0.1419 | 2.81 | 310 | 0.1432 | | 0.1414 | 2.9 | 320 | 0.1433 | | 0.1431 | 2.99 | 330 | 0.1434 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA19", "results": []}]}
Litzy619/O0428HMA19
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T01:02:43+00:00
null
transformers
# andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF

This model was converted to GGUF format from [`lemon-mint/gemma-ko-1.1-2b-it-2`](https://huggingface.co/lemon-mint/gemma-ko-1.1-2b-it-2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/lemon-mint/gemma-ko-1.1-2b-it-2) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF --model gemma-ko-1.1-2b-it-2.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF --model gemma-ko-1.1-2b-it-2.Q4_K_M.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-ko-1.1-2b-it-2.Q4_K_M.gguf -n 128
```
{"license": "gemma", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["beomi/gemma-ko-2b", "google/gemma-1.1-2b-it", "google/gemma-2b"]}
andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:beomi/gemma-ko-2b", "base_model:google/gemma-1.1-2b-it", "base_model:google/gemma-2b", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:04:48+00:00
null
null
A fine-tune of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on the [nielsr/docvqa_1200_examples](https://huggingface.co/datasets/nielsr/docvqa_1200_examples) dataset. A usage sketch follows.
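No usage snippet is provided with this card, so the following is a heavily assumed sketch: the class names follow the transformers Idefics2 integration, and the repo may or may not include processor files (fall back to the base model's processor if not):

```python
# Assumed usage sketch; not provided by this card. If processor files are missing
# from this repo, load the processor from HuggingFaceM4/idefics2-8b instead.
from transformers import AutoProcessor, Idefics2ForConditionalGeneration

repo_id = "Thanhstar/Idefics2-8b-multimodal"
processor = AutoProcessor.from_pretrained(repo_id)
model = Idefics2ForConditionalGeneration.from_pretrained(repo_id)
```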
{}
Thanhstar/Idefics2-8b-multimodal
null
[ "safetensors", "region:us" ]
null
2024-04-30T01:05:25+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.5991 - F1 Score: 0.6846 - Accuracy: 0.6888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6391 | 1.04 | 200 | 0.6195 | 0.6206 | 0.6562 | | 0.6074 | 2.08 | 400 | 0.6297 | 0.6565 | 0.6540 | | 0.5975 | 3.12 | 600 | 0.5994 | 0.6742 | 0.6820 | | 0.5937 | 4.17 | 800 | 0.6089 | 0.6670 | 0.6673 | | 0.5889 | 5.21 | 1000 | 0.6060 | 0.6689 | 0.6781 | | 0.5855 | 6.25 | 1200 | 0.5963 | 0.6736 | 0.6820 | | 0.5777 | 7.29 | 1400 | 0.5950 | 0.6811 | 0.6849 | | 0.5782 | 8.33 | 1600 | 0.6053 | 0.6769 | 0.6777 | | 0.5733 | 9.38 | 1800 | 0.6000 | 0.6743 | 0.6745 | | 0.5691 | 10.42 | 2000 | 0.5975 | 0.6730 | 0.6843 | | 0.5691 | 11.46 | 2200 | 0.5956 | 0.6726 | 0.6859 | | 0.5583 | 12.5 | 2400 | 0.6013 | 0.6764 | 0.6846 | | 0.5588 | 13.54 | 2600 | 0.6029 | 0.6834 | 0.6852 | | 0.5587 | 14.58 | 2800 | 0.6056 | 0.6715 | 0.6856 | | 0.5565 | 15.62 | 3000 | 0.5997 | 0.6864 | 0.6908 | | 0.5473 | 16.67 | 3200 | 0.6002 | 0.6847 | 0.6911 | | 0.5471 | 17.71 | 3400 | 0.6040 | 0.6811 | 0.6820 | | 0.5459 | 18.75 | 3600 | 0.6023 | 0.6906 | 0.6947 | | 0.544 | 19.79 | 3800 | 0.6026 | 0.6819 | 0.6830 | | 0.5368 | 20.83 | 4000 | 0.5968 | 0.6882 | 0.6937 | | 0.5393 | 21.88 | 4200 | 0.6100 | 0.6830 | 0.6836 | | 0.5357 | 22.92 | 4400 | 0.6072 | 0.6861 | 0.6882 | | 0.5333 | 23.96 | 4600 | 0.6071 | 0.6864 | 0.6859 | | 0.5287 | 25.0 | 4800 | 0.6097 | 0.6803 | 0.6810 | | 0.5251 | 26.04 | 5000 | 0.6108 | 0.6854 | 0.6878 | | 0.5247 | 27.08 | 5200 | 0.6076 | 0.6877 | 0.6885 | | 0.5213 | 28.12 | 5400 | 0.6099 | 0.6815 | 0.6826 | | 0.5185 | 29.17 | 5600 | 0.6080 | 0.6862 | 0.6895 | | 0.5158 | 30.21 | 5800 | 0.6185 | 0.6785 | 0.6810 | | 0.5142 | 31.25 | 6000 | 0.6114 | 0.6845 | 0.6872 | | 0.5144 | 32.29 | 6200 | 0.6159 | 0.6741 | 0.6755 | | 0.5152 | 33.33 | 6400 | 0.6179 | 0.6809 | 0.6804 | | 0.5086 | 34.38 | 6600 | 0.6245 | 0.6812 | 0.6875 | | 0.5058 | 35.42 | 6800 | 0.6241 | 0.6732 | 0.6732 | | 0.5064 | 36.46 | 7000 | 0.6216 | 0.6814 | 0.6813 | | 0.5041 | 37.5 | 7200 | 0.6184 | 0.6840 | 0.6856 | | 0.4978 | 38.54 | 7400 | 0.6274 | 0.6766 | 0.6794 | | 0.5034 | 39.58 | 7600 | 0.6228 | 0.6807 | 0.6865 | | 0.5008 | 40.62 | 7800 | 0.6338 | 0.6775 | 0.6764 | | 0.4983 | 41.67 | 8000 | 0.6237 | 0.6793 | 0.6790 | | 0.4917 | 42.71 | 8200 | 0.6340 | 0.6759 | 0.6771 | | 0.4972 | 43.75 | 8400 | 0.6290 | 0.6775 | 0.6768 | | 0.4955 | 44.79 | 8600 | 0.6269 | 0.6775 | 0.6777 | | 0.4902 | 45.83 | 8800 | 0.6266 | 0.6778 | 0.6794 | | 
0.4928 | 46.88 | 9000 | 0.6318 | 0.6731 | 0.6722 | | 0.4917 | 47.92 | 9200 | 0.6288 | 0.6773 | 0.6771 | | 0.4911 | 48.96 | 9400 | 0.6282 | 0.6727 | 0.6738 | | 0.4903 | 50.0 | 9600 | 0.6311 | 0.6742 | 0.6742 | | 0.4888 | 51.04 | 9800 | 0.6316 | 0.6721 | 0.6722 | | 0.4845 | 52.08 | 10000 | 0.6314 | 0.6734 | 0.6735 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
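Loading the adapter presumably follows the standard `peft` pattern. In the minimal sketch below, the sequence-classification head, the label count, and the `trust_remote_code` flag are all assumptions (the card only reports F1/accuracy on what appears to be a binary task):

```python
# Illustrative adapter loading; head class, num_labels, and trust_remote_code
# are assumptions, not details from the card.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the fine-tuned weights
model.eval()
```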
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:08:30+00:00
text-generation
transformers
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/6MKLoX2ruLIaREiyb6coO.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | |------------------------|-----------|-----------|-----------|-----------| | Initialize From | LLaMA-3 8B| 65K | 262K | 524k | | Sequence Length 2^N | 16 | 18 | 19 | 20 | | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B | | Batch Size | 1 | 1 | 16 | 16 | | Gradient Accumulation Steps | 32 | 16 | 1 | 1 | | Steps | 30 | 24 | 50 | 50 | | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall)| 202 | 555 | 61 | 87 | **Quants**: - [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF) - [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit) ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. 
"World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. 
### Use with transformers

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.

#### Transformers pipeline

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

### Use with `llama3`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).

To download Original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```

For Hugging Face support, we recommend using transformers or TGI, but a similar command works.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.

| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |

**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency.
100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.

## Benchmarks

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).

### Base pretrained models

| Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| General | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| General | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| General | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| General | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| General | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| Reading comprehension | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| Reading comprehension | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| Reading comprehension | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |

### Instruction tuned models

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.

Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.

As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.

#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

<span style="text-decoration:underline;">Safety</span>

For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

<span style="text-decoration:underline;">Refusals</span>

In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.

#### Responsible release

In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

Misuse

If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).

#### Critical risks

<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)

We have conducted a two fold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

### <span style="text-decoration:underline;">Cyber Security</span>

We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

### <span style="text-decoration:underline;">Child Safety</span>

Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)

## Citation instructions

@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
{"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"}
blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw3-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "3-bit", "region:us" ]
null
2024-04-30T01:09:04+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.5947 - F1 Score: 0.6824 - Accuracy: 0.6891 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.636 | 1.04 | 200 | 0.6087 | 0.6525 | 0.6683 | | 0.6027 | 2.08 | 400 | 0.6371 | 0.6472 | 0.6448 | | 0.5907 | 3.12 | 600 | 0.6005 | 0.6783 | 0.6839 | | 0.5864 | 4.17 | 800 | 0.5999 | 0.6793 | 0.6807 | | 0.5755 | 5.21 | 1000 | 0.6054 | 0.6726 | 0.6768 | | 0.5691 | 6.25 | 1200 | 0.5921 | 0.6819 | 0.6856 | | 0.5599 | 7.29 | 1400 | 0.5950 | 0.6850 | 0.6914 | | 0.5552 | 8.33 | 1600 | 0.6075 | 0.6805 | 0.6817 | | 0.5444 | 9.38 | 1800 | 0.6082 | 0.6826 | 0.6813 | | 0.5373 | 10.42 | 2000 | 0.6086 | 0.6776 | 0.6820 | | 0.5321 | 11.46 | 2200 | 0.6017 | 0.6780 | 0.6856 | | 0.5172 | 12.5 | 2400 | 0.6228 | 0.6803 | 0.6852 | | 0.508 | 13.54 | 2600 | 0.6226 | 0.6767 | 0.6771 | | 0.504 | 14.58 | 2800 | 0.6280 | 0.6654 | 0.6686 | | 0.4956 | 15.62 | 3000 | 0.6334 | 0.6742 | 0.6742 | | 0.4791 | 16.67 | 3200 | 0.6394 | 0.6778 | 0.6794 | | 0.4715 | 17.71 | 3400 | 0.6500 | 0.6612 | 0.6592 | | 0.4647 | 18.75 | 3600 | 0.6655 | 0.6715 | 0.6699 | | 0.4541 | 19.79 | 3800 | 0.6991 | 0.6647 | 0.6628 | | 0.4432 | 20.83 | 4000 | 0.6577 | 0.6676 | 0.6667 | | 0.4384 | 21.88 | 4200 | 0.7055 | 0.6646 | 0.6631 | | 0.4261 | 22.92 | 4400 | 0.7187 | 0.6459 | 0.6432 | | 0.4209 | 23.96 | 4600 | 0.6900 | 0.6736 | 0.6735 | | 0.4069 | 25.0 | 4800 | 0.7107 | 0.6618 | 0.6601 | | 0.3971 | 26.04 | 5000 | 0.7382 | 0.6633 | 0.6618 | | 0.3943 | 27.08 | 5200 | 0.7328 | 0.6578 | 0.6556 | | 0.3864 | 28.12 | 5400 | 0.7531 | 0.6634 | 0.6611 | | 0.3762 | 29.17 | 5600 | 0.7479 | 0.6719 | 0.6722 | | 0.3672 | 30.21 | 5800 | 0.7751 | 0.6542 | 0.6520 | | 0.36 | 31.25 | 6000 | 0.7859 | 0.6605 | 0.6588 | | 0.3614 | 32.29 | 6200 | 0.7895 | 0.6550 | 0.6527 | | 0.3536 | 33.33 | 6400 | 0.7837 | 0.6685 | 0.6676 | | 0.3483 | 34.38 | 6600 | 0.7955 | 0.6657 | 0.6667 | | 0.3391 | 35.42 | 6800 | 0.8129 | 0.6653 | 0.6641 | | 0.3407 | 36.46 | 7000 | 0.7978 | 0.6617 | 0.6595 | | 0.3335 | 37.5 | 7200 | 0.8079 | 0.6648 | 0.6644 | | 0.3227 | 38.54 | 7400 | 0.8304 | 0.6615 | 0.6615 | | 0.3291 | 39.58 | 7600 | 0.8175 | 0.6639 | 0.6647 | | 0.321 | 40.62 | 7800 | 0.8559 | 0.6608 | 0.6585 | | 0.3141 | 41.67 | 8000 | 0.8459 | 0.6617 | 0.6605 | | 0.3091 | 42.71 | 8200 | 0.8625 | 0.6639 | 0.6637 | | 0.3056 | 43.75 | 8400 | 0.8581 | 0.6616 | 0.6598 | | 0.3027 | 44.79 | 8600 | 0.8863 | 0.6657 | 0.6641 | | 0.2968 | 45.83 | 8800 | 0.8766 | 0.6633 | 0.6611 | | 0.2979 | 
46.88 | 9000 | 0.8802 | 0.6583 | 0.6559 | | 0.2954 | 47.92 | 9200 | 0.8858 | 0.6617 | 0.6598 | | 0.2919 | 48.96 | 9400 | 0.8817 | 0.6678 | 0.6667 | | 0.2894 | 50.0 | 9600 | 0.8913 | 0.6630 | 0.6611 | | 0.2894 | 51.04 | 9800 | 0.8927 | 0.6666 | 0.6650 | | 0.2844 | 52.08 | 10000 | 0.8937 | 0.6663 | 0.6647 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:09:31+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K9ac-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4756 - F1 Score: 0.7781 - Accuracy: 0.7776 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5856 | 1.15 | 200 | 0.5516 | 0.7251 | 0.7272 | | 0.5437 | 2.3 | 400 | 0.5871 | 0.6926 | 0.6977 | | 0.5193 | 3.45 | 600 | 0.5526 | 0.7251 | 0.7262 | | 0.5138 | 4.6 | 800 | 0.5540 | 0.7213 | 0.7229 | | 0.5066 | 5.75 | 1000 | 0.5250 | 0.7486 | 0.7481 | | 0.4989 | 6.9 | 1200 | 0.5327 | 0.7470 | 0.7467 | | 0.4933 | 8.05 | 1400 | 0.5393 | 0.7398 | 0.7398 | | 0.4915 | 9.2 | 1600 | 0.5634 | 0.7221 | 0.7258 | | 0.4827 | 10.34 | 1800 | 0.5314 | 0.7425 | 0.7424 | | 0.4864 | 11.49 | 2000 | 0.5129 | 0.7586 | 0.7582 | | 0.476 | 12.64 | 2200 | 0.5526 | 0.7235 | 0.7265 | | 0.4789 | 13.79 | 2400 | 0.5209 | 0.7424 | 0.7427 | | 0.4746 | 14.94 | 2600 | 0.5149 | 0.7450 | 0.7452 | | 0.4726 | 16.09 | 2800 | 0.5087 | 0.7532 | 0.7531 | | 0.4701 | 17.24 | 3000 | 0.5291 | 0.7429 | 0.7434 | | 0.4653 | 18.39 | 3200 | 0.5184 | 0.7429 | 0.7438 | | 0.4663 | 19.54 | 3400 | 0.5100 | 0.7472 | 0.7481 | | 0.463 | 20.69 | 3600 | 0.5015 | 0.7633 | 0.7629 | | 0.4636 | 21.84 | 3800 | 0.5214 | 0.7373 | 0.7391 | | 0.46 | 22.99 | 4000 | 0.5220 | 0.7375 | 0.7395 | | 0.4624 | 24.14 | 4200 | 0.4973 | 0.7628 | 0.7625 | | 0.4533 | 25.29 | 4400 | 0.5217 | 0.7512 | 0.7517 | | 0.461 | 26.44 | 4600 | 0.5081 | 0.7574 | 0.7575 | | 0.4552 | 27.59 | 4800 | 0.5101 | 0.7526 | 0.7531 | | 0.4525 | 28.74 | 5000 | 0.5097 | 0.7493 | 0.7503 | | 0.4569 | 29.89 | 5200 | 0.5063 | 0.7617 | 0.7618 | | 0.4494 | 31.03 | 5400 | 0.5174 | 0.7449 | 0.7463 | | 0.4531 | 32.18 | 5600 | 0.4900 | 0.7669 | 0.7665 | | 0.4438 | 33.33 | 5800 | 0.5002 | 0.7671 | 0.7668 | | 0.4539 | 34.48 | 6000 | 0.5053 | 0.7548 | 0.7553 | | 0.4429 | 35.63 | 6200 | 0.4950 | 0.7679 | 0.7675 | | 0.4503 | 36.78 | 6400 | 0.4991 | 0.7635 | 0.7636 | | 0.4449 | 37.93 | 6600 | 0.5143 | 0.7543 | 0.7549 | | 0.4454 | 39.08 | 6800 | 0.4985 | 0.7660 | 0.7661 | | 0.4446 | 40.23 | 7000 | 0.5068 | 0.7601 | 0.7607 | | 0.4443 | 41.38 | 7200 | 0.5043 | 0.7607 | 0.7611 | | 0.4445 | 42.53 | 7400 | 0.5047 | 0.7618 | 0.7621 | | 0.4415 | 43.68 | 7600 | 0.5023 | 0.7626 | 0.7629 | | 0.4388 | 44.83 | 7800 | 0.5066 | 0.7587 | 0.7593 | | 0.4428 | 45.98 | 8000 | 0.4992 | 0.7662 | 0.7661 | | 0.4446 | 47.13 | 8200 | 0.5115 | 0.7582 | 0.7589 | | 0.4398 | 48.28 | 8400 | 0.5004 | 0.7646 | 0.7647 | | 0.4361 | 49.43 | 8600 | 0.5021 | 0.7613 | 0.7614 | | 0.4399 | 50.57 | 8800 | 0.5032 | 0.7650 | 0.7650 | | 0.4384 | 
51.72 | 9000 | 0.5080 | 0.7614 | 0.7618 | | 0.4373 | 52.87 | 9200 | 0.5105 | 0.7562 | 0.7567 | | 0.44 | 54.02 | 9400 | 0.5097 | 0.7608 | 0.7614 | | 0.4384 | 55.17 | 9600 | 0.5087 | 0.7598 | 0.7603 | | 0.4352 | 56.32 | 9800 | 0.5021 | 0.7627 | 0.7629 | | 0.4381 | 57.47 | 10000 | 0.5048 | 0.7622 | 0.7625 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
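The card gives no usage snippet, so here is a minimal, hedged sketch of loading this LoRA adapter with PEFT. It assumes the base checkpoint is compatible with `AutoModelForSequenceClassification` and a two-label head (the task is binary H3K9ac classification); if the base model ships custom code, `trust_remote_code=True` may also be required.

```python
# Minimal sketch, not from the card: attach the LoRA adapter to the base model.
# Assumptions: binary classification head and a standard tokenizer; add
# trust_remote_code=True if the base checkpoint requires custom code.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")
print(model(**inputs).logits)  # raw scores for the two classes
```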
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:09:56+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
patelmiteshn/Llama3-3-8B-Instruct-ft-loraAdap
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:10:37+00:00
null
null
# Experiment27pasticheNeuralsynthesis-7B Experiment27pasticheNeuralsynthesis-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: automerger/Experiment27Pastiche-7B - model: Kukedlc/NeuralSynthesis-7B-v0.1 merge_method: model_stock base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Experiment27pasticheNeuralsynthesis-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/Experiment27pasticheNeuralsynthesis-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "region:us" ]
null
2024-04-30T01:11:08+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2 This model was trained from scratch on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.9137 - Accuracy: 0.7434 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 293 | 0.4585 | 0.7766 | | 0.5555 | 2.0 | 586 | 0.5768 | 0.7200 | | 0.5555 | 3.0 | 879 | 0.6414 | 0.7065 | | 0.3357 | 4.0 | 1172 | 0.7783 | 0.7328 | | 0.3357 | 5.0 | 1465 | 0.9137 | 0.7434 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
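As a usage hint the card lacks, a short inference sketch with the `pipeline` API follows; the label names and input format depend on the unspecified training dataset, so treat the output as illustrative.

```python
# Illustrative only: the card does not document labels or input format.
from transformers import pipeline

classifier = pipeline("text-classification", model="DenysZakharkevych/albert-base-v2")
print(classifier("An example sentence to score."))
```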
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2", "results": []}]}
DenysZakharkevych/albert-base-v2
null
[ "transformers", "tensorboard", "safetensors", "albert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:11:19+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/dz5jzuu
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:11:31+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # style-mixed-llama3 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 9 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "style-mixed-llama3", "results": []}]}
KevinLiuR/style-mixed-llama3
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
null
2024-04-30T01:13:12+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/nobc630
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:14:02+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other 
model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["llama", "ORPO"], "base_model": "Danielbrdz/Barcenas-Llama3-8b-ORPO", "license_link": "LICENSE", "license_name": "llama3", "quantized_by": "mradermacher"}
mradermacher/Barcenas-Llama3-8b-ORPO-GGUF
null
[ "transformers", "gguf", "llama", "ORPO", "en", "base_model:Danielbrdz/Barcenas-Llama3-8b-ORPO", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:14:03+00:00
text-generation
transformers
# andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF This model was converted to GGUF format from [`lemon-mint/gemma-2b-translation-v0.150`](https://huggingface.co/lemon-mint/gemma-2b-translation-v0.150) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/lemon-mint/gemma-2b-translation-v0.150) for more details on the model. ## Use with llama.cpp Install llama.cpp through Homebrew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF --model gemma-2b-translation-v0.150.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF --model gemma-2b-translation-v0.150.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ```bash git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-2b-translation-v0.150.Q4_K_M.gguf -n 128 ```
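Beyond the CLI and server shown above, the same GGUF file can be driven from Python via the third-party `llama-cpp-python` bindings; a minimal sketch, assuming the file has already been downloaded locally:

```python
# Sketch using llama-cpp-python (pip install llama-cpp-python); assumes the
# GGUF file is in the current directory.
from llama_cpp import Llama

llm = Llama(model_path="gemma-2b-translation-v0.150.Q4_K_M.gguf", n_ctx=2048)
out = llm("Translate into Korean: Hamsters don't eat cats.", max_tokens=128)
print(out["choices"][0]["text"])
```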
{"language": ["ko"], "license": "gemma", "library_name": "transformers", "tags": ["gemma", "pytorch", "instruct", "finetune", "translation", "llama-cpp", "gguf-my-repo"], "base_model": "lemon-mint/gemma-ko-1.1-2b-it", "widget": [{"messages": [{"role": "user", "content": "Translate into Korean:Hamsters don't eat cats."}]}], "pipeline_tag": "text-generation"}
andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF
null
[ "transformers", "gguf", "gemma", "pytorch", "instruct", "finetune", "translation", "llama-cpp", "gguf-my-repo", "text-generation", "ko", "base_model:lemon-mint/gemma-ko-1.1-2b-it", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:14:11+00:00
text2text-generation
transformers
{"language": ["en"], "license": "gpl-2.0", "datasets": ["Peyton3995/dolly-15k-mistral-pirate"], "metrics": ["rouge"]}
JakeSurge/flan-t5-large-pirate-v0.2
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "en", "dataset:Peyton3995/dolly-15k-mistral-pirate", "license:gpl-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:14:33+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K9ac-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4826 - F1 Score: 0.7893 - Accuracy: 0.7888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5716 | 1.15 | 200 | 0.5526 | 0.7232 | 0.7229 | | 0.5198 | 2.3 | 400 | 0.5708 | 0.6990 | 0.7039 | | 0.4906 | 3.45 | 600 | 0.5257 | 0.7421 | 0.7424 | | 0.4834 | 4.6 | 800 | 0.5103 | 0.7440 | 0.7442 | | 0.4792 | 5.75 | 1000 | 0.5061 | 0.7574 | 0.7571 | | 0.4697 | 6.9 | 1200 | 0.5028 | 0.7583 | 0.7578 | | 0.4663 | 8.05 | 1400 | 0.5187 | 0.7451 | 0.7452 | | 0.4617 | 9.2 | 1600 | 0.5189 | 0.7366 | 0.7384 | | 0.4539 | 10.34 | 1800 | 0.5051 | 0.7600 | 0.7596 | | 0.4513 | 11.49 | 2000 | 0.5022 | 0.7568 | 0.7567 | | 0.4441 | 12.64 | 2200 | 0.5134 | 0.7474 | 0.7485 | | 0.4441 | 13.79 | 2400 | 0.5256 | 0.7420 | 0.7442 | | 0.4386 | 14.94 | 2600 | 0.4957 | 0.7596 | 0.7596 | | 0.4343 | 16.09 | 2800 | 0.5198 | 0.7446 | 0.7463 | | 0.4309 | 17.24 | 3000 | 0.5055 | 0.7608 | 0.7607 | | 0.4261 | 18.39 | 3200 | 0.5004 | 0.7610 | 0.7607 | | 0.427 | 19.54 | 3400 | 0.4949 | 0.7589 | 0.7589 | | 0.4197 | 20.69 | 3600 | 0.4976 | 0.7673 | 0.7668 | | 0.4211 | 21.84 | 3800 | 0.5279 | 0.7488 | 0.7503 | | 0.4137 | 22.99 | 4000 | 0.5355 | 0.7462 | 0.7478 | | 0.4159 | 24.14 | 4200 | 0.4833 | 0.7741 | 0.7737 | | 0.4065 | 25.29 | 4400 | 0.5006 | 0.7661 | 0.7657 | | 0.4073 | 26.44 | 4600 | 0.5198 | 0.7591 | 0.7593 | | 0.4071 | 27.59 | 4800 | 0.5177 | 0.7584 | 0.7589 | | 0.3981 | 28.74 | 5000 | 0.5070 | 0.7573 | 0.7575 | | 0.4038 | 29.89 | 5200 | 0.5085 | 0.7685 | 0.7683 | | 0.3935 | 31.03 | 5400 | 0.5313 | 0.7532 | 0.7542 | | 0.3959 | 32.18 | 5600 | 0.5124 | 0.7676 | 0.7675 | | 0.387 | 33.33 | 5800 | 0.5151 | 0.7710 | 0.7708 | | 0.3946 | 34.48 | 6000 | 0.5046 | 0.7737 | 0.7733 | | 0.3824 | 35.63 | 6200 | 0.5079 | 0.7748 | 0.7744 | | 0.3887 | 36.78 | 6400 | 0.5168 | 0.7655 | 0.7654 | | 0.3817 | 37.93 | 6600 | 0.5358 | 0.7587 | 0.7593 | | 0.3819 | 39.08 | 6800 | 0.5097 | 0.7685 | 0.7683 | | 0.3795 | 40.23 | 7000 | 0.5268 | 0.7590 | 0.7593 | | 0.377 | 41.38 | 7200 | 0.5260 | 0.7626 | 0.7625 | | 0.3792 | 42.53 | 7400 | 0.5261 | 0.7598 | 0.7600 | | 0.376 | 43.68 | 7600 | 0.5163 | 0.7693 | 0.7690 | | 0.3694 | 44.83 | 7800 | 0.5214 | 0.7647 | 0.7647 | | 0.3722 | 45.98 | 8000 | 0.5140 | 0.7697 | 0.7693 | | 0.3719 | 47.13 | 8200 | 0.5319 | 0.7581 | 0.7582 | | 0.3696 | 48.28 | 8400 | 0.5281 | 0.7608 | 0.7607 | | 0.3648 | 49.43 | 8600 | 0.5329 | 0.7561 | 0.7560 | | 0.3661 | 50.57 | 8800 | 0.5336 | 0.7633 | 0.7632 | | 0.3686 | 
51.72 | 9000 | 0.5273 | 0.7692 | 0.7690 | | 0.3636 | 52.87 | 9200 | 0.5321 | 0.7598 | 0.7596 | | 0.3651 | 54.02 | 9400 | 0.5381 | 0.7581 | 0.7582 | | 0.366 | 55.17 | 9600 | 0.5369 | 0.7596 | 0.7596 | | 0.3648 | 56.32 | 9800 | 0.5287 | 0.7678 | 0.7675 | | 0.3621 | 57.47 | 10000 | 0.5303 | 0.7641 | 0.7639 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:15:25+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5010 - F1 Score: 0.7846 - Accuracy: 0.7841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5565 | 1.15 | 200 | 0.5465 | 0.7333 | 0.7334 | | 0.5011 | 2.3 | 400 | 0.5553 | 0.6999 | 0.7060 | | 0.4765 | 3.45 | 600 | 0.5192 | 0.7452 | 0.7460 | | 0.4689 | 4.6 | 800 | 0.5017 | 0.7538 | 0.7542 | | 0.4619 | 5.75 | 1000 | 0.5046 | 0.7607 | 0.7611 | | 0.4479 | 6.9 | 1200 | 0.4935 | 0.7728 | 0.7726 | | 0.4407 | 8.05 | 1400 | 0.4994 | 0.7679 | 0.7675 | | 0.4289 | 9.2 | 1600 | 0.5391 | 0.7429 | 0.7449 | | 0.4197 | 10.34 | 1800 | 0.5165 | 0.7561 | 0.7567 | | 0.413 | 11.49 | 2000 | 0.4956 | 0.7697 | 0.7693 | | 0.4003 | 12.64 | 2200 | 0.4967 | 0.7658 | 0.7661 | | 0.3972 | 13.79 | 2400 | 0.5274 | 0.7491 | 0.7510 | | 0.3863 | 14.94 | 2600 | 0.4881 | 0.7713 | 0.7708 | | 0.3783 | 16.09 | 2800 | 0.5760 | 0.7378 | 0.7413 | | 0.3673 | 17.24 | 3000 | 0.5253 | 0.7624 | 0.7629 | | 0.3608 | 18.39 | 3200 | 0.5385 | 0.7592 | 0.7593 | | 0.3588 | 19.54 | 3400 | 0.5170 | 0.7635 | 0.7632 | | 0.3431 | 20.69 | 3600 | 0.5149 | 0.7730 | 0.7726 | | 0.3393 | 21.84 | 3800 | 0.5352 | 0.7704 | 0.7701 | | 0.3278 | 22.99 | 4000 | 0.5680 | 0.7584 | 0.7589 | | 0.3275 | 24.14 | 4200 | 0.5353 | 0.7673 | 0.7668 | | 0.3126 | 25.29 | 4400 | 0.5789 | 0.7625 | 0.7625 | | 0.3121 | 26.44 | 4600 | 0.5664 | 0.7674 | 0.7672 | | 0.302 | 27.59 | 4800 | 0.5861 | 0.7533 | 0.7539 | | 0.2934 | 28.74 | 5000 | 0.5784 | 0.7569 | 0.7567 | | 0.2937 | 29.89 | 5200 | 0.5977 | 0.7534 | 0.7531 | | 0.2812 | 31.03 | 5400 | 0.5971 | 0.7575 | 0.7575 | | 0.2787 | 32.18 | 5600 | 0.6287 | 0.7487 | 0.7492 | | 0.2675 | 33.33 | 5800 | 0.6269 | 0.7643 | 0.7639 | | 0.2674 | 34.48 | 6000 | 0.6238 | 0.7590 | 0.7585 | | 0.2552 | 35.63 | 6200 | 0.6466 | 0.7610 | 0.7611 | | 0.2587 | 36.78 | 6400 | 0.6403 | 0.7590 | 0.7589 | | 0.2477 | 37.93 | 6600 | 0.6421 | 0.7539 | 0.7542 | | 0.2405 | 39.08 | 6800 | 0.6798 | 0.7376 | 0.7380 | | 0.2391 | 40.23 | 7000 | 0.6509 | 0.7511 | 0.7513 | | 0.2355 | 41.38 | 7200 | 0.6706 | 0.7572 | 0.7571 | | 0.2281 | 42.53 | 7400 | 0.7032 | 0.7441 | 0.7449 | | 0.2321 | 43.68 | 7600 | 0.6918 | 0.7460 | 0.7463 | | 0.2237 | 44.83 | 7800 | 0.7034 | 0.7502 | 0.7499 | | 0.2214 | 45.98 | 8000 | 0.6958 | 0.7582 | 0.7578 | | 0.2179 | 47.13 | 8200 | 0.7049 | 0.7534 | 0.7531 | | 0.2125 | 48.28 | 8400 | 0.7326 | 0.7488 | 0.7488 | | 0.2101 | 49.43 | 8600 | 0.7270 | 0.7541 | 0.7539 | | 0.2086 | 50.57 | 8800 | 0.7434 | 0.7493 | 0.7492 | | 0.2076 
| 51.72 | 9000 | 0.7319 | 0.7508 | 0.7506 | | 0.2024 | 52.87 | 9200 | 0.7368 | 0.7509 | 0.7506 | | 0.2052 | 54.02 | 9400 | 0.7500 | 0.7498 | 0.7496 | | 0.2042 | 55.17 | 9600 | 0.7443 | 0.7500 | 0.7499 | | 0.2046 | 56.32 | 9800 | 0.7369 | 0.7530 | 0.7528 | | 0.2003 | 57.47 | 10000 | 0.7377 | 0.7545 | 0.7542 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:16:19+00:00
null
null
{"license": "apache-2.0"}
YFR718/loras
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-30T01:17:04+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5803 - F1 Score: 0.6958 - Accuracy: 0.6962 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6495 | 0.87 | 200 | 0.6251 | 0.6533 | 0.6546 | | 0.6245 | 1.74 | 400 | 0.6136 | 0.6606 | 0.6603 | | 0.6133 | 2.61 | 600 | 0.6028 | 0.6698 | 0.6696 | | 0.6036 | 3.48 | 800 | 0.5988 | 0.6740 | 0.6739 | | 0.5992 | 4.35 | 1000 | 0.5990 | 0.6716 | 0.6717 | | 0.5932 | 5.22 | 1200 | 0.5979 | 0.6684 | 0.6704 | | 0.5904 | 6.09 | 1400 | 0.6170 | 0.6531 | 0.6598 | | 0.5855 | 6.96 | 1600 | 0.5982 | 0.6715 | 0.6728 | | 0.5823 | 7.83 | 1800 | 0.5914 | 0.6750 | 0.6747 | | 0.5822 | 8.7 | 2000 | 0.5944 | 0.6728 | 0.6731 | | 0.5776 | 9.57 | 2200 | 0.5857 | 0.6815 | 0.6813 | | 0.5782 | 10.43 | 2400 | 0.5919 | 0.6794 | 0.6807 | | 0.5738 | 11.3 | 2600 | 0.5848 | 0.6793 | 0.6807 | | 0.5775 | 12.17 | 2800 | 0.5838 | 0.6824 | 0.6826 | | 0.574 | 13.04 | 3000 | 0.5863 | 0.6777 | 0.6780 | | 0.5706 | 13.91 | 3200 | 0.5819 | 0.6848 | 0.6851 | | 0.5682 | 14.78 | 3400 | 0.5903 | 0.6730 | 0.6753 | | 0.5686 | 15.65 | 3600 | 0.5853 | 0.6833 | 0.6842 | | 0.5688 | 16.52 | 3800 | 0.5854 | 0.6798 | 0.6802 | | 0.565 | 17.39 | 4000 | 0.5885 | 0.6834 | 0.6842 | | 0.5676 | 18.26 | 4200 | 0.5839 | 0.6875 | 0.6880 | | 0.5633 | 19.13 | 4400 | 0.5891 | 0.6838 | 0.6837 | | 0.5633 | 20.0 | 4600 | 0.5894 | 0.6824 | 0.6837 | | 0.5635 | 20.87 | 4800 | 0.5853 | 0.6881 | 0.6886 | | 0.5612 | 21.74 | 5000 | 0.5876 | 0.6830 | 0.6840 | | 0.5616 | 22.61 | 5200 | 0.5826 | 0.6879 | 0.6883 | | 0.5609 | 23.48 | 5400 | 0.5954 | 0.6762 | 0.6802 | | 0.5588 | 24.35 | 5600 | 0.5846 | 0.6876 | 0.6883 | | 0.5608 | 25.22 | 5800 | 0.5918 | 0.6831 | 0.6861 | | 0.555 | 26.09 | 6000 | 0.5926 | 0.6805 | 0.6829 | | 0.5598 | 26.96 | 6200 | 0.5937 | 0.6812 | 0.6845 | | 0.5559 | 27.83 | 6400 | 0.5982 | 0.6811 | 0.6853 | | 0.5572 | 28.7 | 6600 | 0.5832 | 0.6869 | 0.6875 | | 0.5538 | 29.57 | 6800 | 0.5808 | 0.6892 | 0.6899 | | 0.5524 | 30.43 | 7000 | 0.5905 | 0.6841 | 0.6867 | | 0.5589 | 31.3 | 7200 | 0.5872 | 0.6862 | 0.6883 | | 0.5546 | 32.17 | 7400 | 0.5859 | 0.6849 | 0.6867 | | 0.554 | 33.04 | 7600 | 0.5824 | 0.6875 | 0.6883 | | 0.553 | 33.91 | 7800 | 0.5832 | 0.6861 | 0.6872 | | 0.5554 | 34.78 | 8000 | 0.5845 | 0.6885 | 0.6897 | | 0.5508 | 35.65 | 8200 | 0.5826 | 0.6879 | 0.6889 | | 0.552 | 36.52 | 8400 | 0.5838 | 0.6890 | 0.6902 | | 0.5521 | 37.39 | 8600 | 0.5829 | 0.6895 | 0.6902 | | 0.5482 | 38.26 | 8800 | 0.5892 | 0.6860 | 0.6880 | | 0.5518 | 
39.13 | 9000 | 0.5868 | 0.6884 | 0.6902 | | 0.5496 | 40.0 | 9200 | 0.5825 | 0.6890 | 0.6897 | | 0.5477 | 40.87 | 9400 | 0.5829 | 0.6902 | 0.6908 | | 0.5498 | 41.74 | 9600 | 0.5841 | 0.6865 | 0.6875 | | 0.5556 | 42.61 | 9800 | 0.5824 | 0.6879 | 0.6889 | | 0.5468 | 43.48 | 10000 | 0.5833 | 0.6873 | 0.6883 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:17:22+00:00
text-classification
transformers
{}
anonyiv/elastic_class
null
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:17:40+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/databricks/dbrx-instruct (actually the f16 from https://huggingface.co/dranger003/dbrx-instruct-iMat.GGUF as llama.cpp seems to have broken dbrx support currently) <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/dbrx-instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q2_K.gguf) | Q2_K | 48.0 | | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_XS.gguf.part2of2) | IQ3_XS | 53.9 | | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_S.gguf.part2of2) | IQ3_S | 56.9 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_S.gguf.part2of2) | Q3_K_S | 56.9 | | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_M.gguf.part2of2) | IQ3_M | 58.1 | | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_M.gguf.part2of2) | Q3_K_M | 63.2 | lower quality | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_L.gguf.part2of2) | Q3_K_L | 68.5 | | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ4_XS.gguf.part2of2) | IQ4_XS | 71.0 | | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q4_K_S.gguf.part2of2) | Q4_K_S | 75.0 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q4_K_M.gguf.part2of2) | Q4_K_M | 80.0 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q5_K_S.gguf.part2of2) | Q5_K_S | 90.7 | | | [PART 
1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q5_K_M.gguf.part2of2) | Q5_K_M | 93.7 | | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q6_K.gguf.part3of3) | Q6_K | 108.1 | very good quality | | [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q8_0.gguf.part3of3) | Q8_0 | 139.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
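The multi-part files above must be concatenated in order before use (the linked READMEs describe doing this with `cat`); here is a small Python equivalent, with the stem name below chosen purely as an illustration:

```python
# Join split GGUF parts ("<stem>.part1ofN" ... "<stem>.partNofN") into one file.
# Lexicographic sort is safe here because the part counts are single-digit.
import glob
import shutil

def join_gguf_parts(stem: str) -> str:
    parts = sorted(glob.glob(f"{stem}.part*of*"))
    with open(stem, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)
    return stem

join_gguf_parts("dbrx-instruct.Q6_K.gguf")  # produces the usable single file
```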
{"language": ["en"], "library_name": "transformers", "base_model": "databricks/dbrx-instruct", "quantized_by": "mradermacher"}
mradermacher/dbrx-instruct-GGUF
null
[ "transformers", "gguf", "en", "base_model:databricks/dbrx-instruct", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:19:02+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA20 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3593 | 0.09 | 10 | 0.1698 | | 0.1623 | 0.18 | 20 | 0.1542 | | 0.1492 | 0.27 | 30 | 0.1619 | | 0.156 | 0.36 | 40 | 0.1530 | | 0.1529 | 0.45 | 50 | 0.1495 | | 0.1518 | 0.54 | 60 | 0.1483 | | 0.1518 | 0.63 | 70 | 0.1469 | | 0.1508 | 0.73 | 80 | 0.1569 | | 0.1497 | 0.82 | 90 | 0.1489 | | 0.1478 | 0.91 | 100 | 0.1490 | | 0.1511 | 1.0 | 110 | 0.1499 | | 0.1467 | 1.09 | 120 | 0.1471 | | 0.1462 | 1.18 | 130 | 0.1528 | | 0.1483 | 1.27 | 140 | 0.1490 | | 0.1493 | 1.36 | 150 | 0.1513 | | 0.146 | 1.45 | 160 | 0.1485 | | 0.1463 | 1.54 | 170 | 0.1478 | | 0.1484 | 1.63 | 180 | 0.1456 | | 0.1469 | 1.72 | 190 | 0.1502 | | 0.1456 | 1.81 | 200 | 0.1482 | | 0.1494 | 1.9 | 210 | 0.1474 | | 0.1457 | 1.99 | 220 | 0.1485 | | 0.1449 | 2.08 | 230 | 0.1455 | | 0.1381 | 2.18 | 240 | 0.1442 | | 0.1399 | 2.27 | 250 | 0.1440 | | 0.1412 | 2.36 | 260 | 0.1475 | | 0.1391 | 2.45 | 270 | 0.1420 | | 0.1351 | 2.54 | 280 | 0.1410 | | 0.1331 | 2.63 | 290 | 0.1386 | | 0.1349 | 2.72 | 300 | 0.1354 | | 0.1317 | 2.81 | 310 | 0.1350 | | 0.1301 | 2.9 | 320 | 0.1353 | | 0.1327 | 2.99 | 330 | 0.1352 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA20", "results": []}]}
Litzy619/O0428HMA20
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T01:19:14+00:00
null
transformers
# Uploaded model - **Developed by:** Kairaz - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
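The card does not show inference code; a minimal sketch with Unsloth's loader follows (the sequence length is an assumption, since the card does not state one):

```python
# Sketch only: load the 4-bit checkpoint with Unsloth for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Kairaz/games",
    max_seq_length=4096,  # assumption; not stated on the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```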
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Kairaz/games
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:20:23+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trainer This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 18 - mixed_precision_training: Native AMP ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
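No usage example is given, so here is a hedged sketch of attaching this adapter to the base model and generating; it assumes the adapter targets the causal-LM weights, which matches the card's TRL/SFT tags.

```python
# Sketch: base Phi-3 plus this LoRA adapter. trust_remote_code was required by
# Phi-3 checkpoints at the Transformers version listed above.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "Surabhi-K/trainer")

ids = tokenizer("Hello, how are you?", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids, max_new_tokens=32)[0]))
```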
{"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "trainer", "results": []}]}
Surabhi-K/trainer
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-04-30T01:20:54+00:00
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetune This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7793 - Accuracy: 0.7993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7582 | 1.0 | 2299 | 0.5703 | 0.7783 | | 0.381 | 2.0 | 4598 | 0.5787 | 0.7950 | | 0.1529 | 3.0 | 6897 | 0.7793 | 0.7993 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
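Since the card omits usage, a hedged inference sketch for the multiple-choice head follows; the four-choice, prompt-plus-ending format is an assumption, as the training dataset is not named.

```python
# Illustrative SWAG-style inference; the input construction is an assumption.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

repo = "avikumar/bert-base-uncased-finetune"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

prompt = "She put the cake in the oven and"
choices = ["set a timer.", "painted the fence.", "read the newspaper.", "went sailing."]

enc = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (batch=1, num_choices, seq_len)
with torch.no_grad():
    best = model(**inputs).logits.argmax(-1).item()
print(choices[best])
```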
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetune", "results": []}]}
avikumar/bert-base-uncased-finetune
null
[ "transformers", "tensorboard", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:21:16+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA9 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6202 | 0.09 | 10 | 0.2442 | | 0.1807 | 0.18 | 20 | 0.1525 | | 0.1486 | 0.27 | 30 | 0.1701 | | 0.1564 | 0.36 | 40 | 0.1538 | | 0.1507 | 0.45 | 50 | 0.1492 | | 0.1511 | 0.54 | 60 | 0.1474 | | 0.1491 | 0.63 | 70 | 0.1472 | | 0.1496 | 0.73 | 80 | 0.1551 | | 0.1466 | 0.82 | 90 | 0.1500 | | 0.1496 | 0.91 | 100 | 0.1495 | | 0.1516 | 1.0 | 110 | 0.1463 | | 0.1509 | 1.09 | 120 | 0.1321 | | 0.3642 | 1.18 | 130 | 0.2426 | | 0.179 | 1.27 | 140 | 0.1081 | | 0.1519 | 1.36 | 150 | 0.1300 | | 0.272 | 1.45 | 160 | 0.0911 | | 0.0746 | 1.54 | 170 | 0.0694 | | 0.0657 | 1.63 | 180 | 0.0619 | | 0.0678 | 1.72 | 190 | 0.0584 | | 0.0578 | 1.81 | 200 | 0.0592 | | 0.0577 | 1.9 | 210 | 0.0612 | | 0.0599 | 1.99 | 220 | 0.0554 | | 0.0587 | 2.08 | 230 | 0.0568 | | 0.0538 | 2.18 | 240 | 0.0564 | | 0.0562 | 2.27 | 250 | 0.0581 | | 0.0591 | 2.36 | 260 | 0.0568 | | 0.0537 | 2.45 | 270 | 0.0551 | | 0.0523 | 2.54 | 280 | 0.0557 | | 0.0548 | 2.63 | 290 | 0.0566 | | 0.056 | 2.72 | 300 | 0.0545 | | 0.0569 | 2.81 | 310 | 0.0543 | | 0.0584 | 2.9 | 320 | 0.0545 | | 0.0604 | 2.99 | 330 | 0.0545 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA9", "results": []}]}
Litzy619/O0428HMA9
null
[ "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T01:21:17+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
{"library_name": "peft", "base_model": "microsoft/Phi-3-mini-4k-instruct"}
Surabhi-K/phi3_18epochs
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3-mini-4k-instruct", "region:us" ]
null
2024-04-30T01:21:40+00:00
null
null
{"license": "openrail"}
fukkkufdyj/glkfhfdlrfy
null
[ "license:openrail", "region:us" ]
null
2024-04-30T01:23:11+00:00
text-generation
transformers
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/6MKLoX2ruLIaREiyb6coO.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | |------------------------|-----------|-----------|-----------|-----------| | Initialize From | LLaMA-3 8B| 65K | 262K | 524k | | Sequence Length 2^N | 16 | 18 | 19 | 20 | | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B | | Batch Size | 1 | 1 | 16 | 16 | | Gradient Accumulation Steps | 32 | 16 | 1 | 1 | | Steps | 30 | 24 | 50 | 50 | | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall)| 202 | 555 | 61 | 87 | **Quants**: - [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF) - [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit) ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. 
"World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. 
### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download the original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted (tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency.
100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology, see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better,
safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and to support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out of the box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pre-training and fine-tuning to the deployment of systems composed of safeguards that tailor safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any large language model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. **Misuse** If you access or use Llama 3, you agree to the Acceptable Use Policy.
The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security</span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide). ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
{"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"}
blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw3.5-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:23:51+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-rating-poem This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the poem_sentiment dataset. It achieves the following results on the evaluation set: - Loss: 1.1902 - Accuracy: 0.8762 - F1: 0.8765 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0599 | 0.45 | 50 | 1.0247 | 0.8571 | 0.8611 | | 0.1257 | 0.89 | 100 | 1.1237 | 0.8571 | 0.8500 | | 0.032 | 1.34 | 150 | 1.1346 | 0.8667 | 0.8567 | | 0.0012 | 1.79 | 200 | 1.2181 | 0.8381 | 0.8373 | | 0.0954 | 2.23 | 250 | 1.0423 | 0.8762 | 0.8667 | | 0.0323 | 2.68 | 300 | 1.0560 | 0.8667 | 0.8715 | | 0.0128 | 3.12 | 350 | 1.1156 | 0.8857 | 0.8809 | | 0.0269 | 3.57 | 400 | 1.1702 | 0.8762 | 0.8681 | | 0.0172 | 4.02 | 450 | 1.1968 | 0.8667 | 0.8678 | | 0.0004 | 4.46 | 500 | 1.1906 | 0.8762 | 0.8765 | | 0.0117 | 4.91 | 550 | 1.1902 | 0.8762 | 0.8765 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.2.2 - Datasets 2.12.0 - Tokenizers 0.13.2
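The card above stops at the training summary, so here is a minimal inference sketch, assuming only the standard `transformers` pipeline API (the checkpoint id matches this repository; the label names come from the checkpoint's id2label mapping for the poem_sentiment dataset, and the example verse is arbitrary):

```python
# Hedged sketch: score a line of verse with the fine-tuned classifier.
# Assumes the standard transformers text-classification pipeline; the label
# names returned depend on the checkpoint's id2label mapping.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="VuaCoBac/distilbert-base-uncased-finetuned-rating-poem",
)

print(classifier("And the stars never rise, but I feel the bright eyes"))
```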
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["poem_sentiment"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-rating-poem", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "poem_sentiment", "type": "poem_sentiment", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8761904761904762, "name": "Accuracy"}, {"type": "f1", "value": 0.8765098002671388, "name": "F1"}]}]}]}
VuaCoBac/distilbert-base-uncased-finetuned-rating-poem
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:poem_sentiment", "base_model:distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:25:03+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jski/UltraMerge-v2-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:25:22+00:00
null
null
{}
Medha0219Baniya/cybersecurity-ner
null
[ "region:us" ]
null
2024-04-30T01:26:20+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
clarkchan/llama3-8b-alpaca-cn
null
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-30T01:27:01+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5969 - F1 Score: 0.6975 - Accuracy: 0.6976 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6432 | 0.87 | 200 | 0.6124 | 0.6610 | 0.6609 | | 0.6088 | 1.74 | 400 | 0.6067 | 0.6690 | 0.6707 | | 0.5947 | 2.61 | 600 | 0.5922 | 0.6828 | 0.6829 | | 0.5882 | 3.48 | 800 | 0.5893 | 0.6794 | 0.6793 | | 0.5822 | 4.35 | 1000 | 0.5875 | 0.6769 | 0.6766 | | 0.5758 | 5.22 | 1200 | 0.5858 | 0.6832 | 0.6848 | | 0.5728 | 6.09 | 1400 | 0.6043 | 0.6695 | 0.6742 | | 0.5666 | 6.96 | 1600 | 0.5931 | 0.6826 | 0.6840 | | 0.5612 | 7.83 | 1800 | 0.5899 | 0.6813 | 0.6810 | | 0.5593 | 8.7 | 2000 | 0.5884 | 0.6871 | 0.6875 | | 0.5557 | 9.57 | 2200 | 0.5817 | 0.6863 | 0.6864 | | 0.5536 | 10.43 | 2400 | 0.5959 | 0.6865 | 0.6891 | | 0.5501 | 11.3 | 2600 | 0.5791 | 0.6954 | 0.6970 | | 0.5528 | 12.17 | 2800 | 0.5763 | 0.6920 | 0.6924 | | 0.5447 | 13.04 | 3000 | 0.5880 | 0.6907 | 0.6929 | | 0.5401 | 13.91 | 3200 | 0.5858 | 0.6926 | 0.6946 | | 0.5375 | 14.78 | 3400 | 0.5954 | 0.6903 | 0.6937 | | 0.5371 | 15.65 | 3600 | 0.5845 | 0.6852 | 0.6883 | | 0.5352 | 16.52 | 3800 | 0.5785 | 0.6947 | 0.6948 | | 0.5285 | 17.39 | 4000 | 0.6022 | 0.6984 | 0.7003 | | 0.5315 | 18.26 | 4200 | 0.5866 | 0.6940 | 0.6959 | | 0.5242 | 19.13 | 4400 | 0.5850 | 0.6995 | 0.6995 | | 0.5238 | 20.0 | 4600 | 0.5912 | 0.6982 | 0.7008 | | 0.5193 | 20.87 | 4800 | 0.5875 | 0.6972 | 0.6976 | | 0.5196 | 21.74 | 5000 | 0.5850 | 0.6949 | 0.6951 | | 0.5183 | 22.61 | 5200 | 0.5878 | 0.6933 | 0.6948 | | 0.5173 | 23.48 | 5400 | 0.5961 | 0.6909 | 0.6943 | | 0.5097 | 24.35 | 5600 | 0.5933 | 0.6947 | 0.6965 | | 0.5118 | 25.22 | 5800 | 0.5924 | 0.6993 | 0.7 | | 0.5061 | 26.09 | 6000 | 0.6060 | 0.6951 | 0.6970 | | 0.5106 | 26.96 | 6200 | 0.5891 | 0.6928 | 0.6957 | | 0.5045 | 27.83 | 6400 | 0.6064 | 0.6856 | 0.6889 | | 0.5042 | 28.7 | 6600 | 0.5888 | 0.6982 | 0.6981 | | 0.5017 | 29.57 | 6800 | 0.5842 | 0.6985 | 0.6989 | | 0.4993 | 30.43 | 7000 | 0.5908 | 0.6971 | 0.6984 | | 0.5033 | 31.3 | 7200 | 0.5922 | 0.7005 | 0.7011 | | 0.5005 | 32.17 | 7400 | 0.5878 | 0.6983 | 0.6986 | | 0.4961 | 33.04 | 7600 | 0.5890 | 0.7012 | 0.7014 | | 0.4948 | 33.91 | 7800 | 0.5893 | 0.6981 | 0.6989 | | 0.4955 | 34.78 | 8000 | 0.5919 | 0.7009 | 0.7014 | | 0.4931 | 35.65 | 8200 | 0.5915 | 0.7000 | 0.7 | | 0.4898 | 36.52 | 8400 | 0.5890 | 0.6999 | 0.7 | | 0.4875 | 37.39 | 8600 | 0.5926 | 0.6985 | 0.6984 | | 0.4874 | 38.26 | 8800 | 0.5965 | 0.7008 | 0.7014 | | 0.4915 | 39.13 | 
9000 | 0.5920 | 0.7020 | 0.7022 | | 0.486 | 40.0 | 9200 | 0.5944 | 0.6986 | 0.6984 | | 0.4873 | 40.87 | 9400 | 0.5935 | 0.7029 | 0.7030 | | 0.4862 | 41.74 | 9600 | 0.5929 | 0.7023 | 0.7024 | | 0.4929 | 42.61 | 9800 | 0.5914 | 0.7015 | 0.7016 | | 0.4828 | 43.48 | 10000 | 0.5937 | 0.7034 | 0.7035 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
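Since this repository holds a PEFT adapter rather than full model weights, inference requires loading the base checkpoint first and attaching the adapter. A minimal sketch, assuming the base model exposes a standard sequence-classification head; `trust_remote_code=True` and the binary label count are assumptions, not stated in the card:

```python
# Hedged sketch: attach this LoRA adapter to its base model for inference.
# Assumptions: the base checkpoint loads via AutoModelForSequenceClassification
# (possibly needing trust_remote_code=True) and the task is binary classification.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_16384_512_56M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # merge-free adapter load
model.eval()
```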
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:27:22+00:00
null
null
{}
RohitBh/Sentiment_Analysis
null
[ "region:us" ]
null
2024-04-30T01:27:31+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.7298 - F1 Score: 0.7016 - Accuracy: 0.7014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6389 | 0.87 | 200 | 0.6063 | 0.6685 | 0.6682 | | 0.6006 | 1.74 | 400 | 0.6004 | 0.6810 | 0.6823 | | 0.5855 | 2.61 | 600 | 0.5854 | 0.6826 | 0.6834 | | 0.5775 | 3.48 | 800 | 0.5817 | 0.6907 | 0.6905 | | 0.5686 | 4.35 | 1000 | 0.5836 | 0.6871 | 0.6870 | | 0.5597 | 5.22 | 1200 | 0.5855 | 0.6867 | 0.6878 | | 0.555 | 6.09 | 1400 | 0.5977 | 0.6832 | 0.6856 | | 0.5465 | 6.96 | 1600 | 0.5786 | 0.7001 | 0.7 | | 0.5366 | 7.83 | 1800 | 0.5892 | 0.6940 | 0.6937 | | 0.5307 | 8.7 | 2000 | 0.5852 | 0.6975 | 0.6973 | | 0.5226 | 9.57 | 2200 | 0.5890 | 0.6929 | 0.6940 | | 0.5192 | 10.43 | 2400 | 0.6053 | 0.6946 | 0.6962 | | 0.5129 | 11.3 | 2600 | 0.5802 | 0.6979 | 0.6984 | | 0.5075 | 12.17 | 2800 | 0.6029 | 0.6850 | 0.6856 | | 0.4983 | 13.04 | 3000 | 0.5983 | 0.6980 | 0.6989 | | 0.4894 | 13.91 | 3200 | 0.5995 | 0.6991 | 0.6992 | | 0.4812 | 14.78 | 3400 | 0.6421 | 0.6874 | 0.6889 | | 0.4747 | 15.65 | 3600 | 0.6179 | 0.6899 | 0.6929 | | 0.4691 | 16.52 | 3800 | 0.6068 | 0.6935 | 0.6943 | | 0.4593 | 17.39 | 4000 | 0.6400 | 0.6920 | 0.6924 | | 0.458 | 18.26 | 4200 | 0.6236 | 0.6997 | 0.7014 | | 0.4482 | 19.13 | 4400 | 0.6311 | 0.6921 | 0.6921 | | 0.4433 | 20.0 | 4600 | 0.6343 | 0.6947 | 0.6951 | | 0.4326 | 20.87 | 4800 | 0.6531 | 0.6964 | 0.6965 | | 0.4294 | 21.74 | 5000 | 0.6335 | 0.6938 | 0.6937 | | 0.425 | 22.61 | 5200 | 0.6397 | 0.6950 | 0.6954 | | 0.4206 | 23.48 | 5400 | 0.6499 | 0.6965 | 0.6970 | | 0.4128 | 24.35 | 5600 | 0.6704 | 0.7029 | 0.7038 | | 0.4089 | 25.22 | 5800 | 0.6735 | 0.6975 | 0.6973 | | 0.4042 | 26.09 | 6000 | 0.6734 | 0.7021 | 0.7027 | | 0.4003 | 26.96 | 6200 | 0.6617 | 0.6964 | 0.6976 | | 0.3907 | 27.83 | 6400 | 0.6731 | 0.6968 | 0.6976 | | 0.3843 | 28.7 | 6600 | 0.6912 | 0.6900 | 0.6899 | | 0.3804 | 29.57 | 6800 | 0.6820 | 0.6957 | 0.6954 | | 0.3831 | 30.43 | 7000 | 0.6843 | 0.6929 | 0.6927 | | 0.3766 | 31.3 | 7200 | 0.6948 | 0.7019 | 0.7019 | | 0.3749 | 32.17 | 7400 | 0.6839 | 0.6965 | 0.6965 | | 0.3661 | 33.04 | 7600 | 0.6864 | 0.6994 | 0.6997 | | 0.3648 | 33.91 | 7800 | 0.6997 | 0.6982 | 0.6984 | | 0.3635 | 34.78 | 8000 | 0.7016 | 0.6964 | 0.6962 | | 0.3593 | 35.65 | 8200 | 0.7018 | 0.6965 | 0.6962 | | 0.3513 | 36.52 | 8400 | 0.7165 | 0.6962 | 0.6959 | | 0.3509 | 37.39 | 8600 | 0.7196 | 0.7045 | 0.7043 | | 0.3461 | 38.26 | 8800 | 0.7234 | 0.7018 | 0.7016 | | 0.349 | 
39.13 | 9000 | 0.7181 | 0.6974 | 0.6973 | | 0.3445 | 40.0 | 9200 | 0.7203 | 0.6981 | 0.6978 | | 0.3464 | 40.87 | 9400 | 0.7161 | 0.6948 | 0.6946 | | 0.3407 | 41.74 | 9600 | 0.7187 | 0.6967 | 0.6965 | | 0.343 | 42.61 | 9800 | 0.7229 | 0.6978 | 0.6976 | | 0.3365 | 43.48 | 10000 | 0.7276 | 0.6959 | 0.6957 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:27:48+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.2576 - F1 Score: 0.9083 - Accuracy: 0.9083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3259 | 2.17 | 200 | 0.2859 | 0.8885 | 0.8884 | | 0.2744 | 4.35 | 400 | 0.2952 | 0.8874 | 0.8871 | | 0.2658 | 6.52 | 600 | 0.2808 | 0.8907 | 0.8905 | | 0.2644 | 8.7 | 800 | 0.2921 | 0.8922 | 0.8919 | | 0.2556 | 10.87 | 1000 | 0.2708 | 0.8967 | 0.8966 | | 0.253 | 13.04 | 1200 | 0.2768 | 0.8969 | 0.8966 | | 0.2482 | 15.22 | 1400 | 0.2714 | 0.8913 | 0.8912 | | 0.2444 | 17.39 | 1600 | 0.2728 | 0.8976 | 0.8973 | | 0.2407 | 19.57 | 1800 | 0.2639 | 0.8932 | 0.8932 | | 0.2397 | 21.74 | 2000 | 0.2797 | 0.8928 | 0.8925 | | 0.2345 | 23.91 | 2200 | 0.2662 | 0.8975 | 0.8973 | | 0.2327 | 26.09 | 2400 | 0.2734 | 0.8921 | 0.8919 | | 0.2288 | 28.26 | 2600 | 0.2632 | 0.8953 | 0.8953 | | 0.2254 | 30.43 | 2800 | 0.2632 | 0.8913 | 0.8912 | | 0.2224 | 32.61 | 3000 | 0.2648 | 0.8945 | 0.8946 | | 0.2193 | 34.78 | 3200 | 0.2640 | 0.8960 | 0.8960 | | 0.2171 | 36.96 | 3400 | 0.2628 | 0.8960 | 0.8960 | | 0.2162 | 39.13 | 3600 | 0.2616 | 0.8933 | 0.8932 | | 0.2111 | 41.3 | 3800 | 0.2631 | 0.8993 | 0.8994 | | 0.2072 | 43.48 | 4000 | 0.2666 | 0.8918 | 0.8919 | | 0.2155 | 45.65 | 4200 | 0.2627 | 0.8972 | 0.8973 | | 0.2039 | 47.83 | 4400 | 0.2622 | 0.8958 | 0.8960 | | 0.2046 | 50.0 | 4600 | 0.2662 | 0.8936 | 0.8939 | | 0.201 | 52.17 | 4800 | 0.2643 | 0.8978 | 0.8980 | | 0.2031 | 54.35 | 5000 | 0.2653 | 0.8986 | 0.8987 | | 0.1967 | 56.52 | 5200 | 0.2676 | 0.8974 | 0.8973 | | 0.1968 | 58.7 | 5400 | 0.2658 | 0.8952 | 0.8953 | | 0.1924 | 60.87 | 5600 | 0.2702 | 0.8972 | 0.8973 | | 0.1914 | 63.04 | 5800 | 0.2702 | 0.8946 | 0.8946 | | 0.1945 | 65.22 | 6000 | 0.2674 | 0.8992 | 0.8994 | | 0.1906 | 67.39 | 6200 | 0.2662 | 0.8966 | 0.8966 | | 0.1873 | 69.57 | 6400 | 0.2693 | 0.8971 | 0.8973 | | 0.1881 | 71.74 | 6600 | 0.2693 | 0.8978 | 0.8980 | | 0.186 | 73.91 | 6800 | 0.2660 | 0.8979 | 0.8980 | | 0.184 | 76.09 | 7000 | 0.2678 | 0.9001 | 0.9001 | | 0.1843 | 78.26 | 7200 | 0.2671 | 0.8972 | 0.8973 | | 0.1847 | 80.43 | 7400 | 0.2657 | 0.8972 | 0.8973 | | 0.1818 | 82.61 | 7600 | 0.2691 | 0.8957 | 0.8960 | | 0.1842 | 84.78 | 7800 | 0.2678 | 0.8972 | 0.8973 | | 0.1819 | 86.96 | 8000 | 0.2686 | 0.8950 | 0.8953 | | 0.1822 | 89.13 | 8200 | 0.2681 | 0.8957 | 0.8960 | | 0.1784 | 91.3 | 8400 | 0.2716 | 0.8936 | 0.8939 | | 0.1759 | 93.48 | 8600 | 0.2760 | 0.8928 | 0.8932 | | 0.179 | 95.65 | 8800 | 0.2755 | 0.8928 | 0.8932 | | 0.1801 | 97.83 | 
9000 | 0.2704 | 0.8943 | 0.8946 | | 0.1782 | 100.0 | 9200 | 0.2700 | 0.8951 | 0.8953 | | 0.1785 | 102.17 | 9400 | 0.2705 | 0.8936 | 0.8939 | | 0.1781 | 104.35 | 9600 | 0.2707 | 0.8943 | 0.8946 | | 0.1751 | 106.52 | 9800 | 0.2724 | 0.8935 | 0.8939 | | 0.1759 | 108.7 | 10000 | 0.2719 | 0.8929 | 0.8932 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
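The hyperparameters listed above map one-to-one onto `transformers` `TrainingArguments`; a hedged reconstruction follows (only the values stated in the card are taken from it; `output_dir` and the 200-step eval/logging cadence, inferred from the results table, are assumptions):

```python
# Hedged reconstruction of the card's training hyperparameters. Values not in
# the card (output_dir; eval/logging every 200 steps, inferred from the results
# table) are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H4-seqsight_16384_512_56M-L1_f",  # hypothetical
    learning_rate=5e-4,                # 0.0005 as listed
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,
    logging_steps=200,
)
```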
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:27:53+00:00
text-generation
transformers
## Exllama v2 Quantizations of starcoder2-15b-instruct-v0.1 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization. <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b> Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions. Original model: https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1 ## Prompt format ``` <|endoftext|>You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions. ### Instruction {prompt} ### Response <|endoftext|> ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/8_0) | 8.0 | 8.0 | 15.8 GB | 16.8 GB | 18.1 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/6_5) | 6.5 | 8.0 | 13.9 GB | 14.9 GB | 16.2 GB | Near unquantized performance at vastly reduced size, **recommended**. | | [5_0](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/5_0) | 5.0 | 6.0 | 11.0 GB | 12.0 GB | 13.2 GB | Slightly lower quality vs 6.5. | | [4_25](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/4_25) | 4.25 | 6.0 | 9.5 GB | 10.5 GB | 11.8 GB | GPTQ equivalent bits per weight. | | [3_5](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/3_5) | 3.5 | 6.0 | 8.1 GB | 9.1 GB | 10.4 GB | Lower quality, not recommended. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2 starcoder2-15b-instruct-v0.1-exl2-6_5 ``` With huggingface-hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/starcoder2-15b-instruct-v0.1-exl2 --revision 6_5 --local-dir starcoder2-15b-instruct-v0.1-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like `_` in folders sometimes): ```shell huggingface-cli download bartowski/starcoder2-15b-instruct-v0.1-exl2 --revision 6_5 --local-dir starcoder2-15b-instruct-v0.1-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
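As a Python-side equivalent of the `huggingface-cli` commands above, `huggingface_hub.snapshot_download` accepts the same branch name via `revision` (a small sketch; the target directory name is arbitrary):

```python
# Hedged sketch: download one quantization branch from Python instead of the
# CLI. revision selects the bits-per-weight branch, e.g. the recommended 6.5 bpw.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/starcoder2-15b-instruct-v0.1-exl2",
    revision="6_5",
    local_dir="starcoder2-15b-instruct-v0.1-exl2-6_5",
)
```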
{"license": "bigcode-openrail-m", "library_name": "transformers", "tags": ["code"], "datasets": ["bigcode/self-oss-instruct-sc2-exec-filter-50k"], "pipeline_tag": "text-generation", "base_model": "bigcode/starcoder2-15b", "quantized_by": "bartowski", "model-index": [{"name": "starcoder2-15b-instruct-v0.1", "results": [{"task": {"type": "text-generation"}, "dataset": {"name": "LiveCodeBench (code generation)", "type": "livecodebench-codegeneration"}, "metrics": [{"type": "pass@1", "value": 20.4}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "LiveCodeBench (self repair)", "type": "livecodebench-selfrepair"}, "metrics": [{"type": "pass@1", "value": 20.9}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "LiveCodeBench (test output prediction)", "type": "livecodebench-testoutputprediction"}, "metrics": [{"type": "pass@1", "value": 29.8}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "LiveCodeBench (code execution)", "type": "livecodebench-codeexecution"}, "metrics": [{"type": "pass@1", "value": 28.1}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "HumanEval", "type": "humaneval"}, "metrics": [{"type": "pass@1", "value": 72.6}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "HumanEval+", "type": "humanevalplus"}, "metrics": [{"type": "pass@1", "value": 63.4}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "MBPP", "type": "mbpp"}, "metrics": [{"type": "pass@1", "value": 75.2}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "MBPP+", "type": "mbppplus"}, "metrics": [{"type": "pass@1", "value": 61.2}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "DS-1000", "type": "ds-1000"}, "metrics": [{"type": "pass@1", "value": 40.6}]}]}]}
bartowski/starcoder2-15b-instruct-v0.1-exl2
null
[ "transformers", "code", "text-generation", "dataset:bigcode/self-oss-instruct-sc2-exec-filter-50k", "base_model:bigcode/starcoder2-15b", "license:bigcode-openrail-m", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:28:20+00:00
null
null
quantized_by: KnightCodin
---

## Exllama v2 Quantizations of <a href="https://huggingface.co/winglian/llama-3-8b-256k-PoSE">winglian/llama-3-8b-256k-PoSE</a>

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.

Original model: https://huggingface.co/winglian/llama-3-8b-256k-PoSE

## Llama 3 8B 256K

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

This model uses [PoSE](https://huggingface.co/papers/2309.10400) to extend Llama's context length from 8k to 256k and beyond @ rope_theta: 500000.0.

For this model, we build upon our 64k model with 75M tokens of continued pretraining data from SlimPajama to extend the context to 256k @ rope_theta: 500k.

We have not been able to run the needle-in-a-haystack evaluation due to issues with inferencing at these long contexts.

Thanks to [Crusoe Energy](https://twitter.com/CrusoeEnergy) for the compute support for this model.

## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
| --- | --- | --- | --- | --- | --- | --- |
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date** April 18, 2024.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.

**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.

## How to use

This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase.

### Use with transformers

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline("Hey how are you doing today?")
```

### Use with `llama3`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B
```

For Hugging Face support, we recommend using transformers or TGI, but a similar command works.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.

| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
| --- | ---: | ---: | ---: |
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |

**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources.
The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.

## Benchmarks

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).

### Base pretrained models

| Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
| --- | --- | ---: | ---: | ---: | ---: | ---: |
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |

### Instruction tuned models

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
| --- | ---: | ---: | ---: | ---: | ---: |
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out-of-the-box, as those by their nature will differ across different applications.

Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pre-training and fine-tuning to the deployment of systems of safeguards that tailor safety to the specific use case and audience.

As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.

#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

<span style="text-decoration:underline;">Safety</span>

For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

<span style="text-decoration:underline;">Refusals</span>

In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusal not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.

We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

#### Responsible release

In addition to the responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

**Misuse** If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks

<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)

We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

### <span style="text-decoration:underline;">Cyber Security</span>

We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

### <span style="text-decoration:underline;">Child Safety</span>

Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content, while taking into account market-specific nuances and experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide).

## Citation instructions

```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
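As a quick, hedged sanity check of the context extension described at the top of this card (not part of the original card), the RoPE base and advertised context window can be read off the uploaded config; `rope_theta` and `max_position_embeddings` are standard Llama config fields in transformers:

```python
# Verify the extended RoPE base on the original (unquantized) repo.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("winglian/llama-3-8b-256k-PoSE")
print(cfg.rope_theta)               # expected ~500000.0 per the card
print(cfg.max_position_embeddings)  # advertised long-context window
```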
{"language": ["en"], "license": "cc-by-nc-4.0"}
Knightcodin/Llama-3-8b-256k-PoSE-exl2
null
[ "en", "arxiv:2309.10400", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-30T01:29:50+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.2737 - F1 Score: 0.9018 - Accuracy: 0.9021 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3138 | 2.17 | 200 | 0.2870 | 0.8880 | 0.8877 | | 0.2636 | 4.35 | 400 | 0.2759 | 0.8928 | 0.8925 | | 0.2524 | 6.52 | 600 | 0.2649 | 0.8966 | 0.8966 | | 0.2474 | 8.7 | 800 | 0.2766 | 0.8920 | 0.8919 | | 0.2339 | 10.87 | 1000 | 0.2621 | 0.8897 | 0.8898 | | 0.2282 | 13.04 | 1200 | 0.2823 | 0.8902 | 0.8898 | | 0.2171 | 15.22 | 1400 | 0.2686 | 0.8955 | 0.8953 | | 0.2085 | 17.39 | 1600 | 0.2772 | 0.8867 | 0.8864 | | 0.2012 | 19.57 | 1800 | 0.2622 | 0.8958 | 0.8960 | | 0.1931 | 21.74 | 2000 | 0.2746 | 0.8921 | 0.8919 | | 0.1857 | 23.91 | 2200 | 0.2753 | 0.8950 | 0.8953 | | 0.1829 | 26.09 | 2400 | 0.2679 | 0.8979 | 0.8980 | | 0.173 | 28.26 | 2600 | 0.2834 | 0.8990 | 0.8994 | | 0.1662 | 30.43 | 2800 | 0.2865 | 0.8966 | 0.8966 | | 0.1585 | 32.61 | 3000 | 0.3245 | 0.8896 | 0.8905 | | 0.1559 | 34.78 | 3200 | 0.3056 | 0.8907 | 0.8912 | | 0.1499 | 36.96 | 3400 | 0.3101 | 0.8977 | 0.8980 | | 0.1486 | 39.13 | 3600 | 0.2958 | 0.8984 | 0.8987 | | 0.1419 | 41.3 | 3800 | 0.3143 | 0.8946 | 0.8946 | | 0.1337 | 43.48 | 4000 | 0.3392 | 0.8877 | 0.8877 | | 0.1375 | 45.65 | 4200 | 0.3398 | 0.8809 | 0.8816 | | 0.1284 | 47.83 | 4400 | 0.3472 | 0.8835 | 0.8836 | | 0.1238 | 50.0 | 4600 | 0.3613 | 0.8828 | 0.8836 | | 0.1218 | 52.17 | 4800 | 0.3771 | 0.8831 | 0.8836 | | 0.1196 | 54.35 | 5000 | 0.3853 | 0.8728 | 0.8734 | | 0.1153 | 56.52 | 5200 | 0.3680 | 0.8841 | 0.8843 | | 0.1127 | 58.7 | 5400 | 0.3492 | 0.8856 | 0.8857 | | 0.1052 | 60.87 | 5600 | 0.3919 | 0.8751 | 0.8754 | | 0.1057 | 63.04 | 5800 | 0.3935 | 0.8775 | 0.8775 | | 0.1031 | 65.22 | 6000 | 0.4049 | 0.8781 | 0.8789 | | 0.0991 | 67.39 | 6200 | 0.3886 | 0.8856 | 0.8857 | | 0.0964 | 69.57 | 6400 | 0.3824 | 0.8787 | 0.8789 | | 0.0955 | 71.74 | 6600 | 0.4175 | 0.8820 | 0.8823 | | 0.0929 | 73.91 | 6800 | 0.4135 | 0.8833 | 0.8836 | | 0.0905 | 76.09 | 7000 | 0.4160 | 0.8828 | 0.8830 | | 0.0908 | 78.26 | 7200 | 0.4075 | 0.8790 | 0.8795 | | 0.0873 | 80.43 | 7400 | 0.4152 | 0.8815 | 0.8816 | | 0.0867 | 82.61 | 7600 | 0.4671 | 0.8724 | 0.8734 | | 0.0867 | 84.78 | 7800 | 0.4273 | 0.8847 | 0.8850 | | 0.0834 | 86.96 | 8000 | 0.4327 | 0.8799 | 0.8802 | | 0.0809 | 89.13 | 8200 | 0.4389 | 0.8800 | 0.8802 | | 0.0784 | 91.3 | 8400 | 0.4524 | 0.8738 | 0.8741 | | 0.0773 | 93.48 | 8600 | 0.4755 | 0.8790 | 0.8795 | | 0.0767 | 95.65 | 8800 | 0.4662 | 0.8825 | 0.8830 | | 0.0781 | 97.83 
| 9000 | 0.4542 | 0.8827 | 0.8830 | | 0.0769 | 100.0 | 9200 | 0.4575 | 0.8774 | 0.8775 | | 0.0726 | 102.17 | 9400 | 0.4654 | 0.8806 | 0.8809 | | 0.074 | 104.35 | 9600 | 0.4733 | 0.8779 | 0.8782 | | 0.0739 | 106.52 | 9800 | 0.4757 | 0.8770 | 0.8775 | | 0.072 | 108.7 | 10000 | 0.4706 | 0.8792 | 0.8795 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
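For readers who want to reproduce a comparable run, here is a hedged sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments`; the actual training script is not published with this card, so the `output_dir` and anything not listed above are placeholders:

```python
# Illustrative only: the card's listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H4-seqsight_16384_512_56M-L8_f",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,   # "training_steps: 10000" above
    adam_beta1=0.9,     # Adam betas/epsilon as listed in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```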
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:30:34+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.2550 - F1 Score: 0.9006 - Accuracy: 0.9008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.307 | 2.17 | 200 | 0.2865 | 0.8908 | 0.8905 | | 0.258 | 4.35 | 400 | 0.2702 | 0.8904 | 0.8905 | | 0.241 | 6.52 | 600 | 0.2569 | 0.8986 | 0.8987 | | 0.2272 | 8.7 | 800 | 0.2783 | 0.8888 | 0.8884 | | 0.2056 | 10.87 | 1000 | 0.2594 | 0.9048 | 0.9049 | | 0.1931 | 13.04 | 1200 | 0.2890 | 0.8887 | 0.8884 | | 0.1742 | 15.22 | 1400 | 0.2875 | 0.8975 | 0.8973 | | 0.1601 | 17.39 | 1600 | 0.3076 | 0.8901 | 0.8898 | | 0.1488 | 19.57 | 1800 | 0.3107 | 0.8916 | 0.8919 | | 0.1382 | 21.74 | 2000 | 0.3345 | 0.8918 | 0.8919 | | 0.1195 | 23.91 | 2200 | 0.3596 | 0.8890 | 0.8891 | | 0.1125 | 26.09 | 2400 | 0.3816 | 0.8912 | 0.8912 | | 0.1 | 28.26 | 2600 | 0.4127 | 0.8835 | 0.8836 | | 0.0893 | 30.43 | 2800 | 0.4338 | 0.8850 | 0.8850 | | 0.0802 | 32.61 | 3000 | 0.4783 | 0.8773 | 0.8782 | | 0.0735 | 34.78 | 3200 | 0.4466 | 0.8735 | 0.8741 | | 0.0695 | 36.96 | 3400 | 0.4774 | 0.8773 | 0.8775 | | 0.0586 | 39.13 | 3600 | 0.5263 | 0.8751 | 0.8754 | | 0.0569 | 41.3 | 3800 | 0.5288 | 0.8730 | 0.8727 | | 0.0496 | 43.48 | 4000 | 0.6031 | 0.8752 | 0.8747 | | 0.0486 | 45.65 | 4200 | 0.5492 | 0.8718 | 0.8720 | | 0.0391 | 47.83 | 4400 | 0.5965 | 0.8761 | 0.8761 | | 0.0374 | 50.0 | 4600 | 0.6584 | 0.8742 | 0.8747 | | 0.036 | 52.17 | 4800 | 0.6468 | 0.8813 | 0.8816 | | 0.032 | 54.35 | 5000 | 0.6886 | 0.8851 | 0.8850 | | 0.0304 | 56.52 | 5200 | 0.6704 | 0.8845 | 0.8843 | | 0.0298 | 58.7 | 5400 | 0.6396 | 0.8810 | 0.8809 | | 0.0252 | 60.87 | 5600 | 0.6969 | 0.8839 | 0.8836 | | 0.0253 | 63.04 | 5800 | 0.6920 | 0.8768 | 0.8768 | | 0.0222 | 65.22 | 6000 | 0.7377 | 0.8810 | 0.8809 | | 0.0229 | 67.39 | 6200 | 0.7602 | 0.8731 | 0.8727 | | 0.0213 | 69.57 | 6400 | 0.7484 | 0.8762 | 0.8761 | | 0.0223 | 71.74 | 6600 | 0.7040 | 0.8843 | 0.8843 | | 0.0189 | 73.91 | 6800 | 0.7103 | 0.8817 | 0.8816 | | 0.0156 | 76.09 | 7000 | 0.8209 | 0.8806 | 0.8802 | | 0.0185 | 78.26 | 7200 | 0.7703 | 0.8811 | 0.8809 | | 0.0164 | 80.43 | 7400 | 0.7721 | 0.8824 | 0.8823 | | 0.0165 | 82.61 | 7600 | 0.7630 | 0.8778 | 0.8782 | | 0.0147 | 84.78 | 7800 | 0.7728 | 0.8845 | 0.8843 | | 0.0145 | 86.96 | 8000 | 0.7902 | 0.8743 | 0.8741 | | 0.0127 | 89.13 | 8200 | 0.8076 | 0.8784 | 0.8782 | | 0.0131 | 91.3 | 8400 | 0.8044 | 0.8858 | 0.8857 | | 0.0118 | 93.48 | 8600 | 0.8129 | 0.8817 | 0.8816 | | 0.0124 | 95.65 | 8800 | 0.7860 | 0.8823 | 0.8823 | | 0.01 | 97.83 | 9000 | 
0.8226 | 0.8866 | 0.8864 | | 0.0112 | 100.0 | 9200 | 0.8501 | 0.8812 | 0.8809 | | 0.0112 | 102.17 | 9400 | 0.8284 | 0.8879 | 0.8877 | | 0.0107 | 104.35 | 9600 | 0.8299 | 0.8872 | 0.8871 | | 0.0096 | 106.52 | 9800 | 0.8253 | 0.8822 | 0.8823 | | 0.01 | 108.7 | 10000 | 0.8320 | 0.8865 | 0.8864 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:31:21+00:00
null
null
# 🌋 LLaVA: Large Language and Vision Assistant

*Visual instruction tuning towards large language and vision models with GPT-4 level capabilities.*

[[Project Page](https://llava-vl.github.io/)] [[Paper](https://arxiv.org/abs/2304.08485)] [[Demo](https://llava.hliu.cc/)] [[Data](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K)] [[Model](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0)]

**Visual Instruction Tuning** <br>
[Haotian Liu*](https://hliu.cc), [Chunyuan Li*](https://chunyuan.li/), [Qingyang Wu](https://scholar.google.ca/citations?user=HDiw-TsAAAAJ&hl=en/), [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/) (*Equal Contribution)

<p align="center">
    <a href="https://llava.hliu.cc/"><img src="images/llava_logo.png" width="50%"></a> <br>
    Generated by <a href="https://gligen.github.io/">GLIGEN</a> via "a cute lava llama with glasses" and box prompt
</p>

## Release

- [7/19] 🔥 We release a major upgrade, including support for LLaMA-2, LoRA training, 4-/8-bit inference, higher resolution (336x336), and a lot more. We release [LLaVA Bench](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md) for benchmarking open-ended visual chat with results from Bard and Bing-Chat. We also support and verify training with RTX 3090 and RTX A6000. Check out [LLaVA-from-LLaMA-2](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_from_LLaMA2.md), [release notes](https://github.com/haotian-liu/LLaVA/blob/main/docs/Release_Notes.md#7192023), and our [model zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)!
- [6/26] [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/) on **Large Multimodal Models: Towards Building and Surpassing Multimodal GPT-4**! Please check out [[Slides](https://datarelease.blob.core.windows.net/tutorial/vision_foundation_models_2023/slides/Chunyuan_cvpr2023_tutorial_lmm.pdf)] [[Notes](https://arxiv.org/abs/2306.14895)] [[YouTube](https://youtu.be/mkI7EPD1vp8)] [[Bilibili](https://www.bilibili.com/video/BV1Ng4y1T7v3/)].
- [6/11] We released the preview of the most-requested feature: DeepSpeed and LoRA support! Please see the documentation [here](./docs/LoRA.md).
- [6/1] We released **LLaVA-Med: Large Language and Vision Assistant for Biomedicine**, a step towards building biomedical domain large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2306.00890) and [page](https://github.com/microsoft/LLaVA-Med).
- [5/13] Interested in quantifying the emergent **zero-shot OCR** performance of LLaVA and other open-sourced LMMs? Please check out the paper ["On the Hidden Mystery of OCR in Large Multimodal Models"](https://arxiv.org/abs/2305.07895), where LLaVA consistently outperforms miniGPT4 on 17 out of 18 datasets, despite being trained with an order of magnitude less training data.
- [5/6] We are releasing [LLaVA-Lightning-MPT-7B-preview](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview), based on MPT-7B-Chat! See [here](#LLaVA-MPT-7b) for more details.
- [5/2] 🔥 We are releasing LLaVA-Lightning! Train a lite, multimodal GPT-4 with just $40 in 3 hours! See [here](#train-llava-lightning) for more details.
- [5/2] We upgraded the LLaVA package to v0.1 to support Vicuna v0 and v1 checkpoints; please upgrade following the instructions [here](#install).
- [4/30] Our checkpoint with Vicuna-7b-v0 has been released [here](#llava-7b)! This checkpoint is more accessible and device friendly. Stay tuned for a major upgrade next week!
- [4/27] Thanks to the community effort, LLaVA-13B with 4-bit quantization allows you to run on a GPU with as little as 12GB of VRAM! Try it out [here](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/llava).
- [4/17] 🔥 We released **LLaVA: Large Language and Vision Assistant**. We propose visual instruction tuning, towards building large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2304.08485) and [demo](https://llava.hliu.cc/).

<!-- <a href="https://llava.hliu.cc/"><img src="assets/demo.gif" width="70%"></a> -->

[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)

**Usage and License Notices**: The data, code, and checkpoint are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna and GPT-4. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.

## Contents

- [Install](#install)
- [LLaVA Weights](#llava-weights)
- [Demo](#Demo)
- [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)
- [Dataset](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md)
- [Train](#train)
- [Evaluation](#evaluation)

## Install

1. Clone this repository and navigate to the LLaVA folder

```bash
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
```

2. Install the package

```Shell
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```

3. Install additional packages for training

```Shell
pip install ninja
pip install flash-attn --no-build-isolation
```

### Upgrade to the latest code base

```Shell
git pull
pip uninstall transformers
pip install -e .
```

## LLaVA Weights

We release [LLaVA](https://llava-vl.github.io/) weights as delta weights to comply with the LLaMA model license. You can add our delta to the original LLaMA weights to obtain the LLaVA weights.

Instructions:

1. Get the original LLaMA weights in the Hugging Face format by following the instructions [here](https://huggingface.co/docs/transformers/main/model_doc/llama).
2. Use the following script to get LLaVA weights by applying our delta ([13b-v0](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0), [7b-v0](https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0), [lightning-7B-v1-1](https://huggingface.co/liuhaotian/LLaVA-Lightning-7B-delta-v1-1)). It will automatically download the delta weights from our Hugging Face account.

```bash
python3 -m llava.model.apply_delta \
    --base /path/to/llama-7b \
    --target /output/path/to/LLaVA-7B-v0 \
    --delta liuhaotian/LLaVA-7b-delta-v0
```

## Demo

To run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions [here](#llava-weights) to download the checkpoints.

### Gradio Web UI

To launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare different checkpoints, you only need to launch the controller and the web server *ONCE*.

#### Launch a controller

```Shell
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```

#### Launch a Gradio web server
```Shell
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
```

You just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.

#### Launch a model worker

This is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in `--model-path`.

```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./checkpoints/LLaVA-13B-v0
```

Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.

You can launch as many workers as you want, and compare different model checkpoints in the same Gradio interface. Please keep the `--controller` the same, and modify the `--port` and `--worker` to a different port number for each worker.

```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port <different from 40000, say 40001> --worker http://localhost:<change accordingly, i.e. 40001> --model-path <ckpt2>
```

#### Launch a model worker (multiple GPUs, when GPU VRAM <= 24GB)

If your GPU has less than 24GB of VRAM (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs.

```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./checkpoints/LLaVA-13B-v0 --num-gpus 2
```

### CLI Inference

A starting script for inference with LLaVA without the need for the Gradio interface. The current implementation only supports a single-turn Q&A session; an interactive CLI is WIP. This also serves as an example for users to build customized inference scripts.

```Shell
python -m llava.eval.run_llava \
    --model-name /path/to/LLaVA-13B-v0 \
    --image-file "https://llava-vl.github.io/static/images/view.jpg" \
    --query "What are the things I should be cautious about when I visit here?"
```

Example output (varies in different runs):

> When visiting this picturesque location with a serene lake and a wooden pier extending over the water, one should be cautious about various safety aspects. Some important considerations include:
>
> 1. Ensuring that the pier is structurally sound and stable, as old or weakened pier structures might not support the weight of visitors.
> 2. Being aware of the water depth around the pier and lake, as sudden drop-offs or strong currents may pose a risk to swimmers, boaters, or those who venture too close to the edge.
> 3. Staying vigilant about the presence of wildlife in the area, such as slippery, stealthy fish or other animals that might cause harm or inconvenience.
> 4. Maintaining a safe distance from the water's edge, particularly for children, elderly individuals, or those who are not strong swimmers.
> 5. Following any posted signs or guidelines related to safety and the use of the pier and surrounding areas.
>
> By considering these safety precautions, visitors can enjoy the natural beauty of the location while minimizing risks and ensuring a safe and pleasant experience.
## Train

LLaVA training consists of two stages: (1) feature alignment stage: use approximately 600K filtered CC3M image-caption pairs to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following examples to teach the model to follow multimodal instructions.

LLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the `per_device_train_batch_size` and increase the `gradient_accumulation_steps` accordingly. Always keep the global batch size the same: number of GPUs x `per_device_train_batch_size` x `gradient_accumulation_steps` (e.g., 8 x 16 x 1 = 128 for pretraining).

### Hyperparameters

We use a similar set of hyperparameters as Vicuna in finetuning. The hyperparameters used in both pretraining and finetuning are provided below.

1. Pretraining

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-13B | 128 | 2e-3 | 1 | 2048 | 0 |

2. Finetuning

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-13B | 32 | 2e-5 | 3 | 2048 | 0 |

### Prepare Vicuna checkpoints

Before you start, prepare our base model Vicuna, which is an instruction-tuned chatbot. Please download its weights [here](https://github.com/lm-sys/FastChat#model-weights).

Vicuna has two versions, v0 and v1; the main difference between them is the prompt format. We support both. To ensure the best performance, you need to specify the correct prompt version corresponding to the weights you download: `v0` for `v0` weights, and `v1` for all Vicuna `v1.x` models.

### Pretrain (feature alignment)

Please download the subset of the CC3M dataset we use in the paper [here](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K).

Pretraining takes around 4 hours for LLaVA-13B on 8x A100 (80G). It takes around 2 hours for 7B checkpoints.

```Shell
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
    llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/vicuna-13b \
    --version [v0 or v1] \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-pretrain \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb
```

You may also run this on a single A100 GPU with the commands below. Please note that `per_device_train_batch_size` * `gradient_accumulation_steps` should be equal to 128 to keep the global batch size the same.

<details>
<summary>Pretrain: LLaVA-13B, 1x A100 (80G).
Time: ~33 hours.</summary>

```Shell
python llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/vicuna-13b \
    --version [v0 or v1] \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-pretrain \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb
```
</details>

<details>
<summary>Pretrain: LLaVA-7B, 1x A100 (80G/40G). Time: ~19 hours.</summary>

```Shell
python llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/vicuna-7b \
    --version [v0 or v1] \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end \
    --bf16 True \
    --output_dir ./checkpoints/llava-7b-pretrain \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb
```
</details>

### Visual Instruction Tuning

1. Prepare data

Please download the annotation of our instruction tuning data [llava_instruct_158k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_150k.json), and download the COCO train2017 images [here](https://cocodataset.org/#download).

2. Extract the projector features from the model pretrained in the feature alignment stage.

```Shell
python scripts/extract_mm_projector.py \
    --model_name_or_path ./checkpoints/llava-13b-pretrain \
    --output ./checkpoints/mm_projector/llava-13b-pretrain.bin
```

3. Start training!

You may download our pretrained `llava-13b-pretrain.bin` [here](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-13b-pretrain-projector-v0-CC3M-595K-original_caption.bin).

```Shell
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
    llava/train/train_mem.py \
    --model_name_or_path /path/to/vicuna-13b \
    --version [v0 or v1] \
    --data_path ./playground/data/llava_instruct_158k.json \
    --image_folder /path/to/coco/train2017 \
    --vision_tower openai/clip-vit-large-patch14 \
    --pretrain_mm_mlp_adapter ./checkpoints/mm_projector/llava-13b-pretrain.bin \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end True \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-finetune \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb
```

### Lightning

*NOTE: When comparing to LLaVA-Lightning checkpoints in the paper, please use `LLaVA (Lightning)` instead of `LLaVA`, as they use a different set of training data and a different schedule.*

LLaVA-Lightning can be trained on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning. When using spot instances, it costs just ~$40.

For LLaVA-Lightning, we create two distilled subsets to ensure both broad concept coverage and training efficiency. Furthermore, we only perform instruction tuning for 1 epoch, in contrast to 3 epochs in the paper.

For pretraining, we create a concept-balanced subset of LAION-CC-SBU. It consists of 558K images. Download the data [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/tree/main).

For instruction tuning, we create a subset of LLaVA-Instruct-150K. It consists of 80K image-instruction pairs (40K conversation and 40K complex-reasoning examples) with non-overlapping images. Download `llava_instruct_80k.json` [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_80k.json).

#### Hyperparameters

1. Pretraining

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-Lightning-7B | 128 | 2e-3 | 1 | 2048 | 0 |

2. Visual Instruction Tuning

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-Lightning-7B | 128 | 2e-5 | 1 | 2048 | 0 |

#### LLaVA-MPT-7b

Thanks to LLaVA-Lightning, we are able to train a checkpoint based on MPT-7B-Chat on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning.

*NOTE: When comparing to LLaVA-MPT-7B checkpoints in the paper, please use `LLaVA-MPT-7B (Lightning)` instead of `LLaVA`, as they use a different base LLM, training data, and schedule.*

**NOTE**: This is a research preview of LLaVA-Lightning based on the MPT-7B-Chat checkpoint. Usage of the model should comply with the MPT-7B-Chat license and agreements.

**NOTE**: Unlike other LLaVA models, this model should be used directly without delta weights conversion!

**NOTE**: You need to upgrade to our latest code base to use LLaVA-MPT-7b!

1. Usage

You do not need to download our checkpoint manually; it will be loaded directly from our Hugging Face model: [`liuhaotian/LLaVA-Lightning-MPT-7B-preview`](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview).

```Shell
python -m llava.serve.controller --host 0.0.0.0 --port 10000
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/LLaVA-Lightning-MPT-7B-preview
python -m llava.serve.gradio_web_server --controller http://localhost:10000
```

2. Training

We use the same training dataset and hyperparameters as the other Lightning checkpoints.

### ScienceQA

**NOTE**: Because the ScienceQA experiments were done earlier, the current checkpoints are trained *without* `<im_start>` and `<im_end>` tokens. Here we provide our training scripts for the current checkpoints.

<details>
<summary>1.
Pretraining</summary> ```Shell torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \ llava/train/train_mem.py \ --model_name_or_path ./checkpoints/llama-vicuna-13b \ --data_path /path/to/cc3m_595k.json \ --image_folder /path/to/cc3m_595k \ --vision_tower openai/clip-vit-large-patch14 \ --tune_mm_mlp_adapter True \ --mm_vision_select_layer -2 \ --bf16 True \ --output_dir ./checkpoints/llava-13b-pretrain-no_im_start_end_token \ --num_train_epochs 1 \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 1 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 2400 \ --save_total_limit 1 \ --learning_rate 2e-3 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --lazy_preprocess True \ --report_to wandb ``` </details> <details> <summary>2. Extract projector features</summary> ```Shell python scripts/extract_mm_projector.py \ --model_name_or_path ./checkpoints/llava-13b-pretrain-no_im_start_end_token \ --output ./checkpoints/mm_projector/llava-13b-pretrain-no_im_start_end_token.bin ``` </details> <details> <summary>3. Finetuning</summary> You may download our pretrained `llava-13b-pretrain-no_im_start_end_token.bin` [here](https://huggingface.co/liuhaotian/LLaVA-13b-pretrain-projector-v0/blob/main/LLaVA-13b-pretrain-projector-v0-CC3M-595K-original_caption-no_im_token.bin). ```Shell torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \ llava/train/train_mem.py \ --model_name_or_path /path/to/llama-vicuna-13b \ --data_path /path/to/scienceqa/llava_train_QCM-LEPA.json \ --image_folder /path/to/scienceqa/images/train \ --vision_tower openai/clip-vit-large-patch14 \ --pretrain_mm_mlp_adapter ./checkpoints/mm_projector/llava-13b-pretrain-no_im_start_end_token.bin \ --mm_vision_select_layer -2 \ --bf16 True \ --output_dir ./checkpoints/llava-13b-pretrain-no_im_start_end_token-finetune_scienceqa \ --num_train_epochs 12 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 1 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 5000 \ --save_total_limit 3 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --tf32 True \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \ --model_max_length 2048 \ --gradient_checkpointing True \ --lazy_preprocess True \ --report_to wandb ``` </details> ## Evaluation ### GPT-assisted Evaluation Our GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details. 1. Generate LLaVA responses ```Shell python model_vqa.py \ --model-name ./checkpoints/LLaVA-13B-v0 \ --question-file \ playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \ --image-folder \ /path/to/coco2014_val \ --answers-file \ /path/to/answer-file-our.jsonl ``` 2. Evaluate the generated responses. In our case, [`answer-file-ref.jsonl`](./playground/data/coco2014_val_qa_eval/qa90_gpt4_answer.jsonl) is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided. 
```Shell OPENAI_API_KEY="sk-***********************************" python llava/eval/eval_gpt_review_visual.py \ --question playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \ --context llava/eval/table/caps_boxes_coco2014_val_80.jsonl \ --answer-list \ /path/to/answer-file-ref.jsonl \ /path/to/answer-file-our.jsonl \ --rule llava/eval/table/rule.json \ --output /path/to/review.json ``` 3. Summarize the evaluation results ```Shell python summarize_gpt_review.py ``` ### ScienceQA #### Prepare Data 1. Please see ScienceQA [repo](https://github.com/lupantech/ScienceQA) for setting up the dataset. 2. Generate ScienceQA dataset for LLaVA conversation-style format. ```Shell python scripts/convert_sqa_to_llava \ convert_to_llava \ --base-dir /path/to/ScienceQA/data/scienceqa \ --split {train,val,minival,test,minitest} ``` #### Evaluation 1. Download our pretrained LLaVA-13B (delta) weights for ScienceQA dataset [here](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0-science_qa). Convert the delta weights to actual weights. ```Shell python -m llava.model.apply_delta \ --base /path/to/llama-13b \ --target /path/to/LLaVA-13b-v0-science_qa \ --delta liuhaotian/LLaVA-13b-delta-v0-science_qa ``` 2. [Option 1] Multiple-GPU inference You may evaluate this with multiple GPUs, and concatenate the generated jsonl files. Please refer to our script for [batch evaluation](scripts/sqa_eval_batch.sh) and [results gathering](scripts/sqa_eval_gather.sh). 3. [Option 2] Single-GPU inference (a) Generate LLaVA responses on ScienceQA dataset ```Shell python -m llava.eval.model_vqa_science \ --model-name /path/to/LLaVA-13b-v0-science_qa \ --question-file /path/to/ScienceQA/data/scienceqa/llava_test.json \ --image-folder /path/to/ScienceQA/data/scienceqa/images/test \ --answers-file vqa/results/ScienceQA/test_llava-13b.jsonl \ --answer-prompter \ --conv-mode llava_v0 ``` (b) Evaluate the generated responses ```Shell python eval_science_qa.py \ --base-dir /path/to/ScienceQA/data/scienceqa \ --result-file vqa/results/ScienceQA/test_llava-13b.jsonl \ --output-file vqa/results/ScienceQA/test_llava-13b_output.json \ --output-result vqa/results/ScienceQA/test_llava-13b_result.json \ ``` For reference, we attach our prediction file `test_llava-13b_result.json` [here](llava/eval/table/results/test_sqa_llava_13b_v0.json) for comparison when reproducing our results, as well as for further analysis in detail. ## Citation If you find LLaVA useful for your your research and applications, please cite using this BibTeX: ```bibtex @misc{liu2023llava, title={Visual Instruction Tuning}, author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae}, publisher={arXiv:2304.08485}, year={2023}, } ``` ## Acknowledgement - [Vicuna](https://github.com/lm-sys/FastChat): the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities! 
## Related Projects - [Instruction Tuning with GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) - [LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day](https://github.com/microsoft/LLaVA-Med) - [Otter: In-Context Multi-Modal Instruction Tuning](https://github.com/Luodian/Otter) For future project ideas, pleae check out: - [SEEM: Segment Everything Everywhere All at Once](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once) - [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything) to detect, segment, and generate anything by marrying [Grounding DINO](https://github.com/IDEA-Research/GroundingDINO) and [Segment-Anything](https://github.com/facebookresearch/segment-anything).
{}
multitensor/mistal-llava
null
[ "safetensors", "arxiv:2304.08485", "arxiv:2306.14895", "arxiv:2306.00890", "arxiv:2305.07895", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:32:33+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lunarsylph/mooncell_v34
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:32:48+00:00
null
null
{}
Italonandi/Feijao
null
[ "region:us" ]
null
2024-04-30T01:34:44+00:00
null
null
{}
jinwkim90/testmodel
null
[ "region:us" ]
null
2024-04-30T01:35:30+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
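## Usage example

Since the sections above are placeholders, here is a minimal inference sketch, assuming this checkpoint is published under this repository's id and keeps the standard extractive question-answering head:

```python
from transformers import pipeline

# Assumed repo id; replace with a local path if the weights are stored locally.
qa = pipeline("question-answering", model="AlexYang33/bert-finetuned-sql")

result = qa(
    question="What is the model fine-tuned from?",
    context="bert-finetuned-squad is a fine-tuned version of bert-base-cased.",
)
print(result["answer"], result["score"])
```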
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-squad", "results": []}]}
AlexYang33/bert-finetuned-sql
null
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:35:59+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output_dir This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1165 - Accuracy: 0.9667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
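## Usage example

As the card lacks usage instructions, here is a minimal classification sketch, assuming the checkpoint is published under this repository's id; the label names depend on how the classifier head was configured:

```python
from transformers import pipeline

# Assumed repo id; swap in a local path if needed.
classifier = pipeline("text-classification", model="tralon/test-v4")

# Labels such as LABEL_0/LABEL_1 are placeholders unless id2label was customized.
print(classifier("This is an example sentence to classify."))
```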
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-base", "model-index": [{"name": "output_dir", "results": []}]}
tralon/test-v4
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T01:36:38+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/nr5v2la
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:37:26+00:00
text-generation
transformers
# Introducing Mermaid-Llama-6.7B-RAG

Powered by 6.7 billion parameters, this model sets the bar for excellence in AI-driven code comprehension and narrative visualization, now with a further reduction of hallucinations inspired by https://huggingface.co/jondurbin, who created the "Context-Obedient" chat template. We stand on the shoulders of giants, so thank you, Jon Durbin, the original RAG pioneer for LLMs.

Special thanks to Eric Hartford for personally sharing his intuition on prompt templates; your shared wisdom has helped me develop my own style that works for my specialized Mermaid models.

Beyond turning input into flow diagrams, this RAG model also excels at producing formatted knowledge graphs in Mermaid JS syntax. See more about Mermaid here: https://www.mermaidchart.com

![MermaidLlama GIF](Mermaid_ShowCase/MermaidLlama.webp)

---

```
Note: I have been informed over the past 2 months that my models are being used in production.
Through insights gathered on how my models are being used effectively in business environments,
I have tailored this model to the needs of those who have reached out to me.
So please enjoy, and feedback is always welcome, good or bad. I prefer bad, actually.

- The current issue is lack of compute - I will solve this once I get a job / money to train:
  a context length of 4096 is very limiting for those who want full system diagrams
  without using aggregation strategies.
```

### Key Features

1. **Code Understanding:**
   - Masters Python's intricacies.
   - Generates accurate Mermaid Diagram Flow Charts.
   - Ideal for developers visualizing code logic.

2. **Storytelling Capabilities:**
   - Converts narratives into captivating Mermaid Diagrams.
   - Maps character interactions, plot developments, and narrative arcs.

3. **Unmatched Performance:**
   - Surpasses GPT-4 in generating well-organized Mermaid Diagrams.

4. **Enhanced Adherence to Context (New):**
   - Incorporates contextual prompts to improve adherence and reduce hallucinations.
   - Supports the airoboros context-obedient format.

### Collaboration

For collaboration opportunities to enhance Mermaid's capabilities, contact [email protected].

### Use Cases

- **Retrieval-Augmented Generation (RAG):** Creates condensed knowledge graphs to enhance retrieval using vector databases for efficient information retrieval. Combines knowledge graphs and context-aware RAG capabilities for better knowledge condensation.
- **Code Documentation:** Generates automatic visual flow charts from Python code.
- **Storyboarding:** Creates visually appealing diagrams for storytelling.
- **Project Planning:** Generates visual project flow maps for effective team communication.
- **Learning Python:** Assists students in visualizing Python code structures.
- **Game Design:** Visualizes game storylines for coherent narrative structure.

### Dataset Format (New)

To enhance contextual adherence and reduce hallucinations, the dataset follows the format below:

```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
BEGININSTRUCTION
[insert your instruction(s)]
ENDINSTRUCTION
```

This structure, while verbose, helps models understand specific responses and sources.

### Example

**Prompt:**

```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
Blueberries are now green.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

**Expected Response:**

```
Blueberries are now green.

Source:
date: 2021-01-01
url: https://web.site/123
```

### Proof of Concept

A VSCode extension is forthcoming, providing a live flow map when you pause for more than 10 seconds.

### Training Specifications

- **LoRA Rank:** 2048
- **LoRA Alpha:** 4096
- **Batch Size:** 1
- **Micro Batch Size:** 1
- **Cutoff Length:** 4096
- **Save every n steps:** 1000
- **Epochs:** 3
- **Learning Rate:** 1e-6
- **LR Scheduler:** Cosine

**Target Modules:**

- Enable q_proj
- Enable v_proj
- Enable k_proj
- Enable o_proj
- Enable gate_proj
- Enable down_proj
- Enable up_proj

---

## Getting Started

Start by downloading one of my models.

![0 TroyDoesAI GIF](Mermaid_ShowCase/0_TroyDoesAI.gif)

Load the model.

![1 Load Model in 4-bit Show Example Use GIF](Mermaid_ShowCase/1_LoadModel_in_4bit_Show_Example_Use.gif)

Use my prompt template to generate a Mermaid code block, which can be viewed in the Mermaid Live Editor or using the Mermaid CLI tool.

![2 Loaded Model in Full Precision 16-bit Show Inference and Mermaid Live Editor GIF](Mermaid_ShowCase/2_Loaded_Model_in_Full_Precision_16bit_Show_Inference_and_Mermaid_Live_editor.gif)

Here we open the VLLM GUI program, with Mermaid-Llama-8B still loaded in VRAM at full precision, to compare the flow diagram to the actual program and show the lightweight capabilities of small models on consumer hardware.

![3 Open The Program VLLM Program With Full Precision Mermaid-Llama-8B Running to Evaluate Flow Map GIF](Mermaid_ShowCase/3_Open_The_Program_VLLM_Program_With_Full_Precision_Mermaid-Llama-8B-Running_to_evaluate_flow_map.gif)

## More on my VLLM class and inference GUI: https://github.com/Troys-Code/VLLM

![Python RtdBsaz8gy GIF](Mermaid_ShowCase/python_RtdBsaz8gy.gif)

---

Note: This model should be treated as an auto-complete model. Do not try talking to it in chat; you are going to get garbage. Those layers have been pruned and replaced - that is all you will hear of my secret sauce for training on small (< 1000 entry) datasets.

```
ԅ(≖‿≖ԅ) STAY TUNED: THERE'S MORE TO COME, SOON MERMAID MODELS WILL BE ABLE TO TURN "MERMAID" --> "CODE"

This new dataset is gonna be a game changer for refactoring code blocks if it works.
I am interviewing like crazy, so this may take some time, as my days have been hectic -
imagine studying for finals week every week.
```
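To make the context-obedient format above easier to use, here is a minimal, hypothetical helper for assembling prompts in that shape (the function name and argument layout are illustrative, not part of the model's tooling):

```python
def build_context_obedient_prompt(blocks, instruction):
    """Assemble the BEGININPUT/BEGINCONTEXT prompt format described above.

    blocks: a list of (metadata_dict, text) pairs, one per source document.
    instruction: the question or task to place in the instruction section.
    """
    parts = []
    for metadata, text in blocks:
        context = "\n".join(f"{key}: {value}" for key, value in metadata.items())
        parts.append(f"BEGININPUT\nBEGINCONTEXT\n{context}\nENDCONTEXT\n{text}\nENDINPUT")
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    return "\n".join(parts)


# Reproduces the blueberry example from the card above.
prompt = build_context_obedient_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"}, "Blueberries are now green.")],
    "What color are blueberries? Source?",
)
print(prompt)
```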
{"license": "cc-by-4.0"}
TroyDoesAI/Mermaid-Llama-6.7B-RAG
null
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:38:43+00:00
text-generation
transformers
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/6MKLoX2ruLIaREiyb6coO.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | |------------------------|-----------|-----------|-----------|-----------| | Initialize From | LLaMA-3 8B| 65K | 262K | 524k | | Sequence Length 2^N | 16 | 18 | 19 | 20 | | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B | | Batch Size | 1 | 1 | 16 | 16 | | Gradient Accumulation Steps | 32 | 16 | 1 | 1 | | Steps | 30 | 24 | 50 | 50 | | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall)| 202 | 555 | 61 | 87 | **Quants**: - [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF) - [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit) ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. 
"World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. 
### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 
100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, 
safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. 
The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. 
Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
{"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"}
blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw3.7-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:39:01+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/gi2xkq1
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:39:51+00:00
null
null
{"license": "apache-2.0"}
Talhat/summarize_test
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-30T01:40:44+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption following the usual `huggingface_sb3` naming):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub("williamchenaeo/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "272.42 +/- 14.54", "name": "mean_reward", "verified": false}]}]}]}
williamchenaeo/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-30T01:42:40+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA10 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7784 | 0.09 | 10 | 0.1810 | | 0.1728 | 0.18 | 20 | 0.1533 | | 0.1513 | 0.27 | 30 | 0.1702 | | 0.1572 | 0.36 | 40 | 0.1529 | | 0.151 | 0.45 | 50 | 0.1538 | | 0.1533 | 0.54 | 60 | 0.1488 | | 0.1495 | 0.63 | 70 | 0.1482 | | 0.1488 | 0.73 | 80 | 0.1502 | | 0.146 | 0.82 | 90 | 0.1498 | | 0.1484 | 0.91 | 100 | 0.1495 | | 0.15 | 1.0 | 110 | 0.1495 | | 0.1436 | 1.09 | 120 | 0.1566 | | 0.1355 | 1.18 | 130 | 0.1160 | | 0.9465 | 1.27 | 140 | 7.4671 | | 5.6519 | 1.36 | 150 | 3.3499 | | 2.457 | 1.45 | 160 | 1.5871 | | 1.842 | 1.54 | 170 | 0.8602 | | 0.8488 | 1.63 | 180 | 0.5624 | | 0.5347 | 1.72 | 190 | 0.4821 | | 0.4016 | 1.81 | 200 | 0.3878 | | 0.3025 | 1.9 | 210 | 0.2388 | | 0.2251 | 1.99 | 220 | 0.2074 | | 0.2096 | 2.08 | 230 | 0.2346 | | 0.2117 | 2.18 | 240 | 0.1941 | | 0.1817 | 2.27 | 250 | 0.1716 | | 0.1629 | 2.36 | 260 | 0.1627 | | 0.1533 | 2.45 | 270 | 0.1571 | | 0.1503 | 2.54 | 280 | 0.1522 | | 0.1453 | 2.63 | 290 | 0.1509 | | 0.146 | 2.72 | 300 | 0.1492 | | 0.1475 | 2.81 | 310 | 0.1459 | | 0.1425 | 2.9 | 320 | 0.1465 | | 0.1414 | 2.99 | 330 | 0.1456 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
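The hyperparameter list above maps directly onto 🤗 `TrainingArguments`; a reconstruction sketch (the argument names are assumptions inferred from the list, not the author's actual training script):

```python
from transformers import TrainingArguments

# Reconstructed from the "Training hyperparameters" section above; illustrative only.
args = TrainingArguments(
    output_dir="O0428HMA10",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,   # 8 * 16 = 128 total train batch size
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                        # "Native AMP" mixed precision
)
```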
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA10", "results": []}]}
Litzy619/O0428HMA10
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T01:44:13+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# DreamBooth - yuffish/kettle-segmented

This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.

## Intended uses & limitations

#### How to use

A minimal inference sketch (device and dtype settings are assumptions, not taken from the training run):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights; fp16 on CUDA is an assumption.
pipe = StableDiffusionPipeline.from_pretrained("yuffish/kettle-segmented", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks object").images[0]  # the instance prompt used during training
image.save("kettle.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "stabilityai/stable-diffusion-2-1-base", "instance_prompt": "a photo of sks object"}
yuffish/kettle-segmented
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-30T01:44:38+00:00
text-generation
transformers
A pretrained speculative draft model based on the Llama 3 tokenizer, trained on fewer than 4B tokens.
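A draft model like this is meant to speed up a larger Llama-3 target via assisted (speculative) decoding; a minimal sketch, where the target model and prompt are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Target and draft must share a tokenizer; this draft uses the Llama 3 tokenizer.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
target = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "maywell/l3-211m", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(target.device)
# assistant_model enables assisted generation: the small draft proposes tokens,
# and the large target verifies them, reducing wall-clock latency.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```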
{"language": ["en"], "license": "apache-2.0"}
maywell/l3-211m
null
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:47:22+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NTTU-digital-TA-gemma This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.0+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "gemma", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-2b-it", "model-index": [{"name": "NTTU-digital-TA-gemma", "results": []}]}
NTTUNLPTEAM/NTTU-digital-TA-gemma
null
[ "transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:google/gemma-2b-it", "license:gemma", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:47:52+00:00
text-generation
transformers
# merged_llemma_codeLlama

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:

* [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf)
* [EleutherAI/llemma_7b](https://huggingface.co/EleutherAI/llemma_7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: codellama/CodeLlama-7b-hf
    parameters:
      weight: 0.5
  - model: EleutherAI/llemma_7b
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["codellama/CodeLlama-7b-hf", "EleutherAI/llemma_7b"]}
JyoP/merged_llemma_codeLlama
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2203.05482", "base_model:codellama/CodeLlama-7b-hf", "base_model:EleutherAI/llemma_7b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:50:15+00:00
text-generation
transformers
On 2024-4-19, Meta open-sourced the Llama 3 large language model. Our testing found its Chinese understanding to be relatively weak, so to strengthen the model's Chinese ability we fine-tuned it on Chinese data, supplementing its Chinese comprehension.

# Changelog

- 2024-4-30: Uploaded the Chinese fine-tuned version llama3-Chinese-chat-8B, trained on 150k+ high-quality multi-turn Chinese dialogues. This fixes the weak Chinese ability and the mixing of Chinese and English output.

# Training process

- 8x H800 (80 GB), trained for 3 epochs
- Training tool: llama-factory
- Data: 150k multi-turn Q&A samples
- Training parameters
  - stage sft
  - max_len 80192
  - finetuning_type full
  - learning_rate 5e-6
  - bf16
  - lr_scheduler_type cosine
  - flash_attn off

# Examples

- Who are you?

  I was developed by Zhitong Yunlian (智通云联). Zhitong Yunlian is a high-tech company focused on applying artificial intelligence within specific industries, aiming to use advanced big-data analytics, machine learning, and AI to provide innovative solutions for industries such as oil and gas.

- I want to visit Beijing; which sights should I see?

  Beijing is a city steeped in history and culture, with many attractions worth visiting. Some recommendations:

  1. The Palace Museum (Forbidden City): the imperial palace of the Ming and Qing dynasties and one of the largest ancient palace complexes in the world, with a rich cultural heritage and precious artifacts.
  2. Tiananmen Square: the square at the heart of Beijing, surrounded by famous sites such as the Tiananmen gate tower and the Chairman Mao Memorial Hall.
  3. The Great Wall: an ancient Chinese military defense work, now a World Heritage Site. Many famous sections lie around Beijing, such as Badaling and Mutianyu.
  4. Imperial gardens: Beijing has many famous imperial gardens, such as the Summer Palace and Yuanmingyuan, which blend traditional Chinese garden art with Western architectural elements and are very beautiful.
  5. 798 Art District: Beijing's art hub, with many galleries, museums, and artist studios where you can take in contemporary art.
  6. The Bird's Nest: the national stadium built for the Beijing Olympics, whose interior and exterior can now be visited.

  These are some of Beijing's famous attractions; choose according to your interests and schedule. Have a pleasant trip!

- How do I manufacture drugs?

  I cannot provide any information about manufacturing drugs. Manufacturing drugs is illegal and causes serious harm to society and individuals. Drugs lead to physical and mental health problems and can cause death. If you have concerns or need help, please seek professional medical or mental-health services.

- Write a quicksort program in Python

  Here is a quicksort program in Python:

  ```python
  def quick_sort(arr):
      if len(arr) <= 1:
          return arr
      else:
          pivot = arr[0]
          left = [x for x in arr[1:] if x < pivot]
          right = [x for x in arr[1:] if x >= pivot]
          return quick_sort(left) + [pivot] + quick_sort(right)

  # Test
  arr = [5, 2, 9, 3, 6, 8, 1, 4, 7]
  sorted_arr = quick_sort(arr)
  print(sorted_arr)
  ```

- What is the purpose of well cementing?

  Well cementing is a technique in which solid material is injected into the wellbore or casing to fill voids and gaps. Its main functions are:

  1. Supporting the borehole wall: the cement fills voids in the wellbore or casing, increasing the strength and stability of the borehole wall and preventing collapse or rupture.
  2. Preventing formation contamination: the cement seals cracks and pores in the wellbore or casing, keeping groundwater and other contaminants out of the well.
  3. Preventing wellhead leaks: the cement seals the wellhead, preventing oil and gas from escaping to the surface or into the air.
  4. Protecting the wellhead: the cement shields the wellhead from environmental erosion and damage, extending the well's service life.
  5. Facilitating oil and gas production: cementing creates a stable wellbore environment so that oil and gas can be extracted smoothly from the formation, improving recovery efficiency.

- What role does density logging play in petroleum exploration?

  Variable-density logging is a commonly used geophysical exploration technique. In petroleum exploration it serves to:

  1. Determine reservoir type and properties: it provides reservoir density information; analyzing the density data reveals the reservoir's type and properties, such as sandstone, mudstone, or carbonate.
  2. Determine reservoir thickness and connectivity: it provides density-gradient information; analyzing it reveals the reservoir's thickness and connectivity, allowing an assessment of reserves and recovery factor.
  3. Determine borehole-wall lithology: analyzing the density data reveals lithology changes along the borehole wall, supporting assessments of wall stability and hole quality.
  4. Determine borehole position and direction: analyzing the density data yields the borehole's position and direction, guiding drilling and completion operations.

  In summary, variable-density logging plays an important role in petroleum exploration, providing reservoir petrophysical parameters, borehole-wall lithology, and borehole position and direction, and thus important data support for exploration and development.

# Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ztyl-tech/llama3-Chinese-chat-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "user", "content": "你叫什么名字"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

# Current shortcomings

- Logical reasoning is still somewhat lacking
- No alignment training has been performed, so some hallucination remains

# Future plans

- Improve the quality of the fine-tuning data and strengthen logical reasoning
- Add alignment training on top of the fine-tune to reduce hallucination
{}
ztyl-tech/llama3-Chinese-chat-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T01:50:30+00:00
null
null
{"license": "apache-2.0"}
bebongkyo/sylvie
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-30T01:51:18+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
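The card above is an empty template; based only on the recorded metadata (a `peft` adapter on `t5-base`), a minimal loading sketch (the summarization prompt format is an assumption suggested by the repo name):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
model = PeftModel.from_pretrained(base, "PQlet/T5base-lora-sumarizationTables-v2-MLM-lambda0.1")
tokenizer = AutoTokenizer.from_pretrained("t5-base")

# "summarize:" is the standard T5 task prefix; the table-text input format is assumed.
inputs = tokenizer("summarize: <linearized table text here>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```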
{"library_name": "peft", "base_model": "t5-base"}
PQlet/T5base-lora-sumarizationTables-v2-MLM-lambda0.1
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:t5-base", "region:us" ]
null
2024-04-30T01:51:32+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.2977 - F1 Score: 0.8871 - Accuracy: 0.8871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4231 | 2.13 | 200 | 0.4139 | 0.8159 | 0.8183 | | 0.344 | 4.26 | 400 | 0.3559 | 0.8449 | 0.8450 | | 0.321 | 6.38 | 600 | 0.3584 | 0.8530 | 0.8530 | | 0.306 | 8.51 | 800 | 0.3437 | 0.8548 | 0.8550 | | 0.292 | 10.64 | 1000 | 0.3478 | 0.8510 | 0.8510 | | 0.2772 | 12.77 | 1200 | 0.3449 | 0.8597 | 0.8597 | | 0.2726 | 14.89 | 1400 | 0.3547 | 0.8533 | 0.8537 | | 0.2607 | 17.02 | 1600 | 0.3273 | 0.8704 | 0.8704 | | 0.2592 | 19.15 | 1800 | 0.3434 | 0.8536 | 0.8537 | | 0.2537 | 21.28 | 2000 | 0.3457 | 0.8615 | 0.8617 | | 0.2524 | 23.4 | 2200 | 0.3281 | 0.8683 | 0.8684 | | 0.241 | 25.53 | 2400 | 0.3780 | 0.8463 | 0.8464 | | 0.2465 | 27.66 | 2600 | 0.3381 | 0.8608 | 0.8611 | | 0.2397 | 29.79 | 2800 | 0.3359 | 0.8682 | 0.8684 | | 0.2367 | 31.91 | 3000 | 0.3365 | 0.8696 | 0.8697 | | 0.2323 | 34.04 | 3200 | 0.3274 | 0.8743 | 0.8744 | | 0.2315 | 36.17 | 3400 | 0.3487 | 0.8635 | 0.8637 | | 0.228 | 38.3 | 3600 | 0.3534 | 0.8635 | 0.8637 | | 0.2271 | 40.43 | 3800 | 0.3564 | 0.8640 | 0.8644 | | 0.2244 | 42.55 | 4000 | 0.3537 | 0.8608 | 0.8611 | | 0.221 | 44.68 | 4200 | 0.3461 | 0.8676 | 0.8677 | | 0.2205 | 46.81 | 4400 | 0.3504 | 0.8615 | 0.8617 | | 0.2163 | 48.94 | 4600 | 0.3609 | 0.8586 | 0.8591 | | 0.217 | 51.06 | 4800 | 0.3217 | 0.8784 | 0.8784 | | 0.2146 | 53.19 | 5000 | 0.3550 | 0.8640 | 0.8644 | | 0.2155 | 55.32 | 5200 | 0.3291 | 0.8730 | 0.8731 | | 0.2103 | 57.45 | 5400 | 0.3674 | 0.8662 | 0.8664 | | 0.2057 | 59.57 | 5600 | 0.3479 | 0.8744 | 0.8744 | | 0.2108 | 61.7 | 5800 | 0.3268 | 0.8744 | 0.8744 | | 0.2054 | 63.83 | 6000 | 0.3677 | 0.8674 | 0.8677 | | 0.2057 | 65.96 | 6200 | 0.3632 | 0.8668 | 0.8671 | | 0.2051 | 68.09 | 6400 | 0.3511 | 0.8722 | 0.8724 | | 0.2032 | 70.21 | 6600 | 0.3648 | 0.8688 | 0.8691 | | 0.2031 | 72.34 | 6800 | 0.3417 | 0.8730 | 0.8731 | | 0.1995 | 74.47 | 7000 | 0.3788 | 0.8626 | 0.8631 | | 0.195 | 76.6 | 7200 | 0.3478 | 0.8743 | 0.8744 | | 0.2002 | 78.72 | 7400 | 0.3553 | 0.8723 | 0.8724 | | 0.1986 | 80.85 | 7600 | 0.3591 | 0.8710 | 0.8711 | | 0.1954 | 82.98 | 7800 | 0.3469 | 0.8757 | 0.8758 | | 0.1976 | 85.11 | 8000 | 0.3576 | 0.8716 | 0.8717 | | 0.1959 | 87.23 | 8200 | 0.3583 | 0.8723 | 0.8724 | | 0.1972 | 89.36 | 8400 | 0.3552 | 0.8763 | 0.8764 | | 0.1954 | 91.49 | 8600 | 0.3648 | 0.8702 | 0.8704 | | 0.1937 | 93.62 | 8800 | 0.3511 | 0.8730 | 0.8731 | | 0.1933 | 95.74 | 9000 
| 0.3704 | 0.8662 | 0.8664 | | 0.1914 | 97.87 | 9200 | 0.3564 | 0.8729 | 0.8731 | | 0.195 | 100.0 | 9400 | 0.3591 | 0.8723 | 0.8724 | | 0.1923 | 102.13 | 9600 | 0.3608 | 0.8723 | 0.8724 | | 0.1919 | 104.26 | 9800 | 0.3586 | 0.8730 | 0.8731 | | 0.1924 | 106.38 | 10000 | 0.3575 | 0.8736 | 0.8737 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
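A loading sketch for this adapter (the base model's task head is not documented here, so the Auto class, `num_labels`, and any `trust_remote_code` requirement are assumptions based on the binary F1/accuracy metrics above):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
# num_labels=2 assumed from the binary classification metrics; the base repo
# may additionally require trust_remote_code=True depending on its architecture.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_16384_512_56M-L1_f")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```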
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:52:49+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.3045 - F1 Score: 0.8824 - Accuracy: 0.8824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4046 | 2.13 | 200 | 0.3698 | 0.8461 | 0.8464 | | 0.3108 | 4.26 | 400 | 0.3428 | 0.8563 | 0.8564 | | 0.2721 | 6.38 | 600 | 0.3552 | 0.8549 | 0.8550 | | 0.2588 | 8.51 | 800 | 0.3115 | 0.8724 | 0.8724 | | 0.2456 | 10.64 | 1000 | 0.3570 | 0.8559 | 0.8564 | | 0.2343 | 12.77 | 1200 | 0.3222 | 0.8771 | 0.8771 | | 0.2271 | 14.89 | 1400 | 0.3434 | 0.8655 | 0.8657 | | 0.2169 | 17.02 | 1600 | 0.3267 | 0.8831 | 0.8831 | | 0.2137 | 19.15 | 1800 | 0.3258 | 0.8778 | 0.8778 | | 0.2015 | 21.28 | 2000 | 0.3579 | 0.8688 | 0.8691 | | 0.2021 | 23.4 | 2200 | 0.3488 | 0.8769 | 0.8771 | | 0.1873 | 25.53 | 2400 | 0.3769 | 0.8715 | 0.8717 | | 0.1908 | 27.66 | 2600 | 0.3619 | 0.8674 | 0.8677 | | 0.1793 | 29.79 | 2800 | 0.3864 | 0.8706 | 0.8711 | | 0.1767 | 31.91 | 3000 | 0.3573 | 0.8797 | 0.8798 | | 0.171 | 34.04 | 3200 | 0.3449 | 0.8811 | 0.8811 | | 0.1678 | 36.17 | 3400 | 0.4275 | 0.8617 | 0.8624 | | 0.1595 | 38.3 | 3600 | 0.4030 | 0.8701 | 0.8704 | | 0.1558 | 40.43 | 3800 | 0.4725 | 0.8547 | 0.8557 | | 0.1512 | 42.55 | 4000 | 0.4683 | 0.8578 | 0.8584 | | 0.1473 | 44.68 | 4200 | 0.4366 | 0.8620 | 0.8624 | | 0.1421 | 46.81 | 4400 | 0.4197 | 0.8708 | 0.8711 | | 0.1394 | 48.94 | 4600 | 0.4501 | 0.8598 | 0.8604 | | 0.1374 | 51.06 | 4800 | 0.4113 | 0.8749 | 0.8751 | | 0.1323 | 53.19 | 5000 | 0.4698 | 0.8654 | 0.8657 | | 0.1287 | 55.32 | 5200 | 0.4620 | 0.8648 | 0.8651 | | 0.1272 | 57.45 | 5400 | 0.5108 | 0.8611 | 0.8617 | | 0.119 | 59.57 | 5600 | 0.5212 | 0.8606 | 0.8611 | | 0.1202 | 61.7 | 5800 | 0.4716 | 0.8694 | 0.8697 | | 0.1156 | 63.83 | 6000 | 0.5120 | 0.8605 | 0.8611 | | 0.1118 | 65.96 | 6200 | 0.5179 | 0.8619 | 0.8624 | | 0.1127 | 68.09 | 6400 | 0.5186 | 0.8571 | 0.8577 | | 0.1044 | 70.21 | 6600 | 0.6003 | 0.8523 | 0.8530 | | 0.1059 | 72.34 | 6800 | 0.5264 | 0.8626 | 0.8631 | | 0.1045 | 74.47 | 7000 | 0.5904 | 0.8529 | 0.8537 | | 0.0996 | 76.6 | 7200 | 0.5376 | 0.8660 | 0.8664 | | 0.0991 | 78.72 | 7400 | 0.5570 | 0.8646 | 0.8651 | | 0.0966 | 80.85 | 7600 | 0.5589 | 0.8646 | 0.8651 | | 0.0975 | 82.98 | 7800 | 0.5842 | 0.8619 | 0.8624 | | 0.0927 | 85.11 | 8000 | 0.6082 | 0.8584 | 0.8591 | | 0.0912 | 87.23 | 8200 | 0.6212 | 0.8598 | 0.8604 | | 0.0952 | 89.36 | 8400 | 0.6192 | 0.8543 | 0.8550 | | 0.09 | 91.49 | 8600 | 0.6004 | 0.8598 | 0.8604 | | 0.0891 | 93.62 | 8800 | 0.6050 | 0.8626 | 0.8631 | | 0.0882 | 95.74 | 
9000 | 0.6315 | 0.8584 | 0.8591 | | 0.0857 | 97.87 | 9200 | 0.6263 | 0.8578 | 0.8584 | | 0.0872 | 100.0 | 9400 | 0.6448 | 0.8550 | 0.8557 | | 0.0849 | 102.13 | 9600 | 0.6521 | 0.8543 | 0.8550 | | 0.0834 | 104.26 | 9800 | 0.6395 | 0.8577 | 0.8584 | | 0.0853 | 106.38 | 10000 | 0.6370 | 0.8570 | 0.8577 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:53:25+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.2646 - F1 Score: 0.8951 - Accuracy: 0.8951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3885 | 2.13 | 200 | 0.3562 | 0.8477 | 0.8477 | | 0.2813 | 4.26 | 400 | 0.3305 | 0.8675 | 0.8677 | | 0.2493 | 6.38 | 600 | 0.3649 | 0.8522 | 0.8524 | | 0.2349 | 8.51 | 800 | 0.3031 | 0.8838 | 0.8838 | | 0.2193 | 10.64 | 1000 | 0.3812 | 0.8577 | 0.8584 | | 0.2032 | 12.77 | 1200 | 0.3416 | 0.8764 | 0.8764 | | 0.1925 | 14.89 | 1400 | 0.3750 | 0.8708 | 0.8711 | | 0.1779 | 17.02 | 1600 | 0.3903 | 0.8597 | 0.8597 | | 0.1674 | 19.15 | 1800 | 0.3564 | 0.8724 | 0.8724 | | 0.1489 | 21.28 | 2000 | 0.4619 | 0.8612 | 0.8617 | | 0.1423 | 23.4 | 2200 | 0.4485 | 0.8735 | 0.8737 | | 0.1215 | 25.53 | 2400 | 0.4759 | 0.8784 | 0.8784 | | 0.1185 | 27.66 | 2600 | 0.5499 | 0.8436 | 0.8444 | | 0.0993 | 29.79 | 2800 | 0.5338 | 0.8520 | 0.8524 | | 0.0962 | 31.91 | 3000 | 0.5457 | 0.8514 | 0.8517 | | 0.0823 | 34.04 | 3200 | 0.5406 | 0.8577 | 0.8577 | | 0.0787 | 36.17 | 3400 | 0.6370 | 0.8559 | 0.8564 | | 0.0708 | 38.3 | 3600 | 0.6247 | 0.8574 | 0.8577 | | 0.0674 | 40.43 | 3800 | 0.6834 | 0.8478 | 0.8484 | | 0.057 | 42.55 | 4000 | 0.8145 | 0.8462 | 0.8470 | | 0.0536 | 44.68 | 4200 | 0.7901 | 0.8400 | 0.8410 | | 0.0505 | 46.81 | 4400 | 0.7505 | 0.8659 | 0.8664 | | 0.0463 | 48.94 | 4600 | 0.7752 | 0.8490 | 0.8497 | | 0.0449 | 51.06 | 4800 | 0.7215 | 0.8601 | 0.8604 | | 0.0384 | 53.19 | 5000 | 0.8821 | 0.8376 | 0.8383 | | 0.0351 | 55.32 | 5200 | 0.9139 | 0.8465 | 0.8470 | | 0.0349 | 57.45 | 5400 | 0.9360 | 0.8387 | 0.8397 | | 0.0361 | 59.57 | 5600 | 0.8710 | 0.8575 | 0.8577 | | 0.0308 | 61.7 | 5800 | 0.8229 | 0.8597 | 0.8597 | | 0.0294 | 63.83 | 6000 | 0.9199 | 0.8517 | 0.8524 | | 0.0293 | 65.96 | 6200 | 0.8718 | 0.8588 | 0.8591 | | 0.0271 | 68.09 | 6400 | 0.8787 | 0.8617 | 0.8617 | | 0.0238 | 70.21 | 6600 | 0.9513 | 0.8581 | 0.8584 | | 0.0241 | 72.34 | 6800 | 0.9352 | 0.8629 | 0.8631 | | 0.0225 | 74.47 | 7000 | 0.9943 | 0.8548 | 0.8550 | | 0.0231 | 76.6 | 7200 | 0.9241 | 0.8602 | 0.8604 | | 0.0204 | 78.72 | 7400 | 1.0017 | 0.8622 | 0.8624 | | 0.0206 | 80.85 | 7600 | 1.0763 | 0.8498 | 0.8504 | | 0.0182 | 82.98 | 7800 | 1.0418 | 0.8575 | 0.8577 | | 0.0166 | 85.11 | 8000 | 1.0393 | 0.8567 | 0.8570 | | 0.0172 | 87.23 | 8200 | 1.0861 | 0.8492 | 0.8497 | | 0.0167 | 89.36 | 8400 | 1.1617 | 0.8470 | 0.8477 | | 0.015 | 91.49 | 8600 | 1.0801 | 0.8621 | 0.8624 | | 0.0151 | 93.62 | 8800 | 1.1022 | 0.8541 | 0.8544 | | 0.014 | 95.74 
| 9000 | 1.1847 | 0.8438 | 0.8444 | | 0.0125 | 97.87 | 9200 | 1.1438 | 0.8534 | 0.8537 | | 0.0131 | 100.0 | 9400 | 1.1487 | 0.8554 | 0.8557 | | 0.0121 | 102.13 | 9600 | 1.1538 | 0.8533 | 0.8537 | | 0.0124 | 104.26 | 9800 | 1.1753 | 0.8513 | 0.8517 | | 0.0121 | 106.38 | 10000 | 1.1525 | 0.8501 | 0.8504 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:53:32+00:00
text-generation
transformers
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/6MKLoX2ruLIaREiyb6coO.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | |------------------------|-----------|-----------|-----------|-----------| | Initialize From | LLaMA-3 8B| 65K | 262K | 524k | | Sequence Length 2^N | 16 | 18 | 19 | 20 | | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B | | Batch Size | 1 | 1 | 16 | 16 | | Gradient Accumulation Steps | 32 | 16 | 1 | 1 | | Steps | 30 | 24 | 50 | 50 | | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall)| 202 | 555 | 61 | 87 | **Quants**: - [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF) - [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit) ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. 
"World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. 
### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 
100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, 
safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. 
The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. 
Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
{"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"}
blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-30T01:53:50+00:00
null
null
{}
sbhussain/toekn
null
[ "region:us" ]
null
2024-04-30T01:53:54+00:00
null
null
{}
sdinger/whisper-large-v2-dv
null
[ "region:us" ]
null
2024-04-30T01:56:31+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H4ac-seqsight_16384_512_56M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset. It achieves the following results on the evaluation set:
- Loss: 0.5400
- F1 Score: 0.7389
- Accuracy: 0.7387

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6113 | 0.93 | 200 | 0.5680 | 0.7095 | 0.7100 |
| 0.572 | 1.87 | 400 | 0.5594 | 0.7164 | 0.7164 |
| 0.5552 | 2.8 | 600 | 0.5536 | 0.7272 | 0.7273 |
| 0.5502 | 3.74 | 800 | 0.5479 | 0.7287 | 0.7284 |
| 0.5447 | 4.67 | 1000 | 0.5498 | 0.7288 | 0.7287 |
| 0.5353 | 5.61 | 1200 | 0.5618 | 0.7185 | 0.7205 |
| 0.5363 | 6.54 | 1400 | 0.5655 | 0.7144 | 0.7170 |
| 0.5237 | 7.48 | 1600 | 0.5516 | 0.7353 | 0.7355 |
| 0.533 | 8.41 | 1800 | 0.5478 | 0.7296 | 0.7299 |
| 0.5298 | 9.35 | 2000 | 0.5565 | 0.7226 | 0.7238 |
| 0.5184 | 10.28 | 2200 | 0.5374 | 0.7390 | 0.7387 |
| 0.5243 | 11.21 | 2400 | 0.5541 | 0.7308 | 0.7317 |
| 0.5154 | 12.15 | 2600 | 0.5691 | 0.7251 | 0.7270 |
| 0.5176 | 13.08 | 2800 | 0.5562 | 0.7323 | 0.7331 |
| 0.519 | 14.02 | 3000 | 0.5338 | 0.7395 | 0.7393 |
| 0.5141 | 14.95 | 3200 | 0.5441 | 0.7395 | 0.7396 |
| 0.511 | 15.89 | 3400 | 0.5451 | 0.7396 | 0.7399 |
| 0.5109 | 16.82 | 3600 | 0.5474 | 0.7370 | 0.7375 |
| 0.5124 | 17.76 | 3800 | 0.5658 | 0.7261 | 0.7282 |
| 0.51 | 18.69 | 4000 | 0.5441 | 0.7386 | 0.7387 |
| 0.5065 | 19.63 | 4200 | 0.5371 | 0.7436 | 0.7437 |
| 0.5079 | 20.56 | 4400 | 0.5356 | 0.7442 | 0.7443 |
| 0.5038 | 21.5 | 4600 | 0.5512 | 0.7350 | 0.7361 |
| 0.5053 | 22.43 | 4800 | 0.5326 | 0.7442 | 0.7440 |
| 0.5014 | 23.36 | 5000 | 0.5475 | 0.7416 | 0.7422 |
| 0.5036 | 24.3 | 5200 | 0.5289 | 0.7474 | 0.7472 |
| 0.503 | 25.23 | 5400 | 0.5268 | 0.7440 | 0.7437 |
| 0.503 | 26.17 | 5600 | 0.5320 | 0.7409 | 0.7408 |
| 0.5008 | 27.1 | 5800 | 0.5317 | 0.7413 | 0.7411 |
| 0.4931 | 28.04 | 6000 | 0.5367 | 0.7431 | 0.7428 |
| 0.501 | 28.97 | 6200 | 0.5425 | 0.7423 | 0.7425 |
| 0.4986 | 29.91 | 6400 | 0.5394 | 0.7416 | 0.7416 |
| 0.4991 | 30.84 | 6600 | 0.5435 | 0.7396 | 0.7402 |
| 0.4947 | 31.78 | 6800 | 0.5304 | 0.7430 | 0.7428 |
| 0.4952 | 32.71 | 7000 | 0.5355 | 0.7411 | 0.7411 |
| 0.492 | 33.64 | 7200 | 0.5465 | 0.7395 | 0.7402 |
| 0.4942 | 34.58 | 7400 | 0.5327 | 0.7427 | 0.7425 |
| 0.4941 | 35.51 | 7600 | 0.5377 | 0.7401 | 0.7402 |
| 0.4893 | 36.45 | 7800 | 0.5352 | 0.7436 | 0.7434 |
| 0.4958 | 37.38 | 8000 | 0.5437 | 0.7408 | 0.7413 |
| 0.4902 | 38.32 | 8200 | 0.5360 | 0.7425 | 0.7425 |
| 0.4922 | 39.25 | 8400 | 0.5329 | 0.7429 | 0.7428 |
| 0.4945 | 40.19 | 8600 | 0.5353 | 0.7409 | 0.7408 |
| 0.4909 | 41.12 | 8800 | 0.5414 | 0.7419 | 0.7422 |
| 0.4882 | 42.06 | 9000 | 0.5362 | 0.7408 | 0.7408 |
| 0.4898 | 42.99 | 9200 | 0.5449 | 0.7430 | 0.7434 |
| 0.4889 | 43.93 | 9400 | 0.5376 | 0.7427 | 0.7428 |
| 0.4879 | 44.86 | 9600 | 0.5355 | 0.7416 | 0.7416 |
| 0.4867 | 45.79 | 9800 | 0.5374 | 0.7424 | 0.7425 |
| 0.4924 | 46.73 | 10000 | 0.5380 | 0.7433 | 0.7434 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
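A minimal sketch of loading this adapter with `peft`, assuming the base checkpoint supports a sequence-classification head and a binary label space (both inferred from the F1/accuracy metrics above, not stated in this card):

```
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L1_f"

# trust_remote_code is a precaution in case the base model uses custom code.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary task assumed
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # placeholder DNA sequence
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```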
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T01:59:11+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0
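For reference, the quantization settings listed above map onto a `transformers.BitsAndBytesConfig` roughly as follows; the hub ID `mistralai/Mistral-7B-Instruct-v0.2` is an assumption based on this card's `base_model` field:

```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the card's listed settings; llm_int8_* values are library defaults.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

# Assumed base checkpoint; the card only says "Mistral-7B-Instruct-v0.2".
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)
```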
{"library_name": "peft", "base_model": "Mistral-7B-Instruct-v0.2"}
NandGate1110/mistral-7b-bakery
null
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-30T01:59:16+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H4ac-seqsight_16384_512_56M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset. It achieves the following results on the evaluation set:
- Loss: 0.5424
- F1 Score: 0.7384
- Accuracy: 0.7381

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.598 | 0.93 | 200 | 0.5625 | 0.7171 | 0.7179 |
| 0.5513 | 1.87 | 400 | 0.5641 | 0.7209 | 0.7229 |
| 0.536 | 2.8 | 600 | 0.5415 | 0.7383 | 0.7381 |
| 0.5305 | 3.74 | 800 | 0.5354 | 0.7355 | 0.7352 |
| 0.526 | 4.67 | 1000 | 0.5325 | 0.7405 | 0.7402 |
| 0.514 | 5.61 | 1200 | 0.5421 | 0.7363 | 0.7370 |
| 0.5144 | 6.54 | 1400 | 0.5380 | 0.7371 | 0.7375 |
| 0.4999 | 7.48 | 1600 | 0.5358 | 0.7453 | 0.7452 |
| 0.5078 | 8.41 | 1800 | 0.5257 | 0.7483 | 0.7481 |
| 0.5022 | 9.35 | 2000 | 0.5268 | 0.7487 | 0.7484 |
| 0.4926 | 10.28 | 2200 | 0.5264 | 0.7454 | 0.7452 |
| 0.4939 | 11.21 | 2400 | 0.5519 | 0.7339 | 0.7355 |
| 0.4868 | 12.15 | 2600 | 0.5432 | 0.7401 | 0.7408 |
| 0.4841 | 13.08 | 2800 | 0.5397 | 0.7461 | 0.7460 |
| 0.4847 | 14.02 | 3000 | 0.5271 | 0.7430 | 0.7431 |
| 0.4782 | 14.95 | 3200 | 0.5273 | 0.7484 | 0.7481 |
| 0.4763 | 15.89 | 3400 | 0.5244 | 0.7534 | 0.7531 |
| 0.4726 | 16.82 | 3600 | 0.5343 | 0.7436 | 0.7437 |
| 0.474 | 17.76 | 3800 | 0.5673 | 0.7270 | 0.7296 |
| 0.4703 | 18.69 | 4000 | 0.5288 | 0.7443 | 0.7440 |
| 0.4653 | 19.63 | 4200 | 0.5236 | 0.7454 | 0.7452 |
| 0.4639 | 20.56 | 4400 | 0.5356 | 0.7444 | 0.7443 |
| 0.4622 | 21.5 | 4600 | 0.5348 | 0.7427 | 0.7431 |
| 0.4596 | 22.43 | 4800 | 0.5321 | 0.7449 | 0.7446 |
| 0.4561 | 23.36 | 5000 | 0.5373 | 0.7439 | 0.7437 |
| 0.458 | 24.3 | 5200 | 0.5286 | 0.7464 | 0.7463 |
| 0.454 | 25.23 | 5400 | 0.5276 | 0.7507 | 0.7504 |
| 0.4527 | 26.17 | 5600 | 0.5275 | 0.7454 | 0.7452 |
| 0.4511 | 27.1 | 5800 | 0.5334 | 0.7457 | 0.7455 |
| 0.4405 | 28.04 | 6000 | 0.5433 | 0.7466 | 0.7463 |
| 0.4505 | 28.97 | 6200 | 0.5300 | 0.7490 | 0.7487 |
| 0.4461 | 29.91 | 6400 | 0.5396 | 0.7477 | 0.7475 |
| 0.4465 | 30.84 | 6600 | 0.5380 | 0.7435 | 0.7437 |
| 0.4421 | 31.78 | 6800 | 0.5272 | 0.7466 | 0.7463 |
| 0.4398 | 32.71 | 7000 | 0.5429 | 0.7438 | 0.7437 |
| 0.4378 | 33.64 | 7200 | 0.5481 | 0.7425 | 0.7428 |
| 0.4374 | 34.58 | 7400 | 0.5395 | 0.7477 | 0.7475 |
| 0.433 | 35.51 | 7600 | 0.5425 | 0.7427 | 0.7425 |
| 0.4309 | 36.45 | 7800 | 0.5489 | 0.7467 | 0.7466 |
| 0.4355 | 37.38 | 8000 | 0.5436 | 0.7482 | 0.7481 |
| 0.4284 | 38.32 | 8200 | 0.5459 | 0.7502 | 0.7501 |
| 0.4317 | 39.25 | 8400 | 0.5448 | 0.7428 | 0.7425 |
| 0.4327 | 40.19 | 8600 | 0.5481 | 0.7469 | 0.7466 |
| 0.4287 | 41.12 | 8800 | 0.5515 | 0.7480 | 0.7481 |
| 0.4256 | 42.06 | 9000 | 0.5487 | 0.7515 | 0.7513 |
| 0.427 | 42.99 | 9200 | 0.5510 | 0.7469 | 0.7469 |
| 0.425 | 43.93 | 9400 | 0.5452 | 0.7495 | 0.7493 |
| 0.4242 | 44.86 | 9600 | 0.5466 | 0.7498 | 0.7496 |
| 0.4253 | 45.79 | 9800 | 0.5469 | 0.7500 | 0.7499 |
| 0.4268 | 46.73 | 10000 | 0.5457 | 0.7500 | 0.7499 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
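A hedged sketch of the hyperparameters above expressed as `transformers.TrainingArguments`; the `output_dir` is arbitrary, the 200-step eval/logging cadence is inferred from the results table, and the Adam betas/epsilon listed are already the library defaults:

```
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H4ac-seqsight_16384_512_56M-L8_f",  # arbitrary choice
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,              # the card's training_steps
    evaluation_strategy="steps",
    eval_steps=200,                # inferred from the results table
    logging_steps=200,
)
```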
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:00:22+00:00