Dataset columns (schema from the viewer header):

| Column | Type | Range |
|:--------------|:----------------|:-----------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | n/a |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25 to 25 |
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biomistral-7b-dpo-full-sft-wo-healthsearch_qa This model is a fine-tuned version of [Minbyul/biomistral-7b-wo-healthsearch_qa-sft](https://huggingface.co/Minbyul/biomistral-7b-wo-healthsearch_qa-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.6929 - Rewards/chosen: 0.0003 - Rewards/rejected: -0.0003 - Rewards/accuracies: 0.5394 - Rewards/margins: 0.0007 - Logps/rejected: -1184.0101 - Logps/chosen: -767.6729 - Logits/rejected: -3.1682 - Logits/chosen: -3.2170 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
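The card above stops at training details and ships no usage snippet; a minimal, hypothetical inference sketch for this checkpoint (model id taken from this record; the prompt and generation settings are illustrative, not from the card):

```python
# Hypothetical usage sketch -- this card provides no official example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Minbyul/biomistral-7b-dpo-full-sft-wo-healthsearch_qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Question: What are common causes of iron-deficiency anemia?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```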
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/biomistral-7b-wo-healthsearch_qa-sft", "model-index": [{"name": "biomistral-7b-dpo-full-sft-wo-healthsearch_qa", "results": []}]}
Minbyul/biomistral-7b-dpo-full-sft-wo-healthsearch_qa
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Minbyul/biomistral-7b-wo-healthsearch_qa-sft", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:01:03+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tduch/gemma-7b-it-adapters-alex-street
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:01:28+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4ac-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5443 - F1 Score: 0.7415 - Accuracy: 0.7413 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5865 | 0.93 | 200 | 0.5616 | 0.7236 | 0.7246 | | 0.5409 | 1.87 | 400 | 0.5523 | 0.7232 | 0.7246 | | 0.5243 | 2.8 | 600 | 0.5323 | 0.7402 | 0.7399 | | 0.5133 | 3.74 | 800 | 0.5351 | 0.7370 | 0.7372 | | 0.5091 | 4.67 | 1000 | 0.5197 | 0.7474 | 0.7472 | | 0.4952 | 5.61 | 1200 | 0.5306 | 0.7454 | 0.7455 | | 0.4908 | 6.54 | 1400 | 0.5291 | 0.7436 | 0.7437 | | 0.4748 | 7.48 | 1600 | 0.5288 | 0.7397 | 0.7396 | | 0.4777 | 8.41 | 1800 | 0.5187 | 0.7454 | 0.7452 | | 0.4683 | 9.35 | 2000 | 0.5285 | 0.7318 | 0.7328 | | 0.4579 | 10.28 | 2200 | 0.5254 | 0.7501 | 0.7499 | | 0.4525 | 11.21 | 2400 | 0.5367 | 0.7453 | 0.7452 | | 0.4419 | 12.15 | 2600 | 0.5284 | 0.7412 | 0.7416 | | 0.4354 | 13.08 | 2800 | 0.5425 | 0.7490 | 0.7487 | | 0.4326 | 14.02 | 3000 | 0.5501 | 0.7409 | 0.7413 | | 0.425 | 14.95 | 3200 | 0.5560 | 0.7504 | 0.7501 | | 0.4155 | 15.89 | 3400 | 0.5385 | 0.7507 | 0.7504 | | 0.4054 | 16.82 | 3600 | 0.5621 | 0.7375 | 0.7372 | | 0.4034 | 17.76 | 3800 | 0.6042 | 0.7287 | 0.7314 | | 0.3951 | 18.69 | 4000 | 0.5603 | 0.7334 | 0.7334 | | 0.3892 | 19.63 | 4200 | 0.5567 | 0.7455 | 0.7452 | | 0.38 | 20.56 | 4400 | 0.5779 | 0.7408 | 0.7405 | | 0.376 | 21.5 | 4600 | 0.5861 | 0.7414 | 0.7413 | | 0.3681 | 22.43 | 4800 | 0.5816 | 0.7367 | 0.7364 | | 0.3586 | 23.36 | 5000 | 0.6062 | 0.7376 | 0.7378 | | 0.3575 | 24.3 | 5200 | 0.5973 | 0.7431 | 0.7428 | | 0.3537 | 25.23 | 5400 | 0.5922 | 0.7384 | 0.7381 | | 0.3443 | 26.17 | 5600 | 0.5948 | 0.7375 | 0.7372 | | 0.341 | 27.1 | 5800 | 0.6103 | 0.7323 | 0.7323 | | 0.3265 | 28.04 | 6000 | 0.6109 | 0.7393 | 0.7390 | | 0.3317 | 28.97 | 6200 | 0.6055 | 0.7329 | 0.7326 | | 0.3274 | 29.91 | 6400 | 0.6146 | 0.7270 | 0.7267 | | 0.3222 | 30.84 | 6600 | 0.6171 | 0.7323 | 0.7320 | | 0.3159 | 31.78 | 6800 | 0.5983 | 0.7299 | 0.7296 | | 0.3057 | 32.71 | 7000 | 0.6538 | 0.7258 | 0.7255 | | 0.3081 | 33.64 | 7200 | 0.6444 | 0.7245 | 0.7243 | | 0.3031 | 34.58 | 7400 | 0.6478 | 0.7320 | 0.7317 | | 0.299 | 35.51 | 7600 | 0.6399 | 0.7263 | 0.7261 | | 0.2883 | 36.45 | 7800 | 0.6671 | 0.7349 | 0.7346 | | 0.2941 | 37.38 | 8000 | 0.6549 | 0.7273 | 0.7270 | | 0.2869 | 38.32 | 8200 | 0.6615 | 0.7320 | 0.7317 | | 0.2848 | 39.25 | 8400 | 0.6594 | 0.7293 | 0.7290 | | 0.2852 | 40.19 | 8600 | 0.6697 | 0.7323 | 0.7320 | | 0.2811 | 41.12 | 8800 | 0.6715 | 0.7291 | 0.7287 | | 0.2754 | 42.06 | 
9000 | 0.6837 | 0.7296 | 0.7293 | | 0.278 | 42.99 | 9200 | 0.6753 | 0.7314 | 0.7311 | | 0.2715 | 43.93 | 9400 | 0.6735 | 0.7257 | 0.7255 | | 0.2657 | 44.86 | 9600 | 0.6834 | 0.7284 | 0.7282 | | 0.2685 | 45.79 | 9800 | 0.6874 | 0.7296 | 0.7293 | | 0.2717 | 46.73 | 10000 | 0.6834 | 0.7284 | 0.7282 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
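The card documents training but not loading; a minimal, hypothetical sketch for attaching this PEFT adapter to its base model (the head type and label count are assumptions: the F1/accuracy metrics suggest binary classification, and the seqsight base may require a custom model class instead of the Auto class used here):

```python
# Hypothetical loading sketch -- head type and num_labels are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # PEFT 0.9.0 per the card
model.eval()
```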
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:02:06+00:00
null
null
{"license": "openrail"}
Coolwowsocoolwow/Baldi
null
[ "license:openrail", "region:us" ]
null
2024-04-30T02:02:42+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA11 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6441 | 0.09 | 10 | 0.2795 | | 0.1859 | 0.18 | 20 | 0.1565 | | 0.1509 | 0.27 | 30 | 0.1657 | | 0.1581 | 0.36 | 40 | 0.1509 | | 0.1497 | 0.45 | 50 | 0.1504 | | 0.1514 | 0.54 | 60 | 0.1496 | | 0.1497 | 0.63 | 70 | 0.1472 | | 0.1487 | 0.73 | 80 | 0.1529 | | 0.1465 | 0.82 | 90 | 0.1488 | | 0.149 | 0.91 | 100 | 0.1478 | | 0.1511 | 1.0 | 110 | 0.1483 | | 0.1438 | 1.09 | 120 | 0.1352 | | 0.1382 | 1.18 | 130 | 0.1203 | | 0.59 | 1.27 | 140 | 2.9085 | | 0.602 | 1.36 | 150 | 1.5195 | | 6.4792 | 1.45 | 160 | 5.2383 | | 2.3451 | 1.54 | 170 | 0.7049 | | 1.0846 | 1.63 | 180 | 0.6462 | | 0.5224 | 1.72 | 190 | 0.3806 | | 0.3875 | 1.81 | 200 | 0.2835 | | 0.2533 | 1.9 | 210 | 0.2670 | | 0.2265 | 1.99 | 220 | 0.2117 | | 0.1544 | 2.08 | 230 | 0.1180 | | 0.1085 | 2.18 | 240 | 0.0898 | | 0.0812 | 2.27 | 250 | 0.0735 | | 0.0721 | 2.36 | 260 | 0.0757 | | 0.0719 | 2.45 | 270 | 0.0617 | | 0.0545 | 2.54 | 280 | 0.0565 | | 0.0479 | 2.63 | 290 | 0.0479 | | 0.0458 | 2.72 | 300 | 0.0403 | | 0.0316 | 2.81 | 310 | 0.0371 | | 0.0298 | 2.9 | 320 | 0.0362 | | 0.0346 | 2.99 | 330 | 0.0353 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
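No usage snippet accompanies this card; a minimal, hypothetical sketch, assuming the repository holds a full (non-adapter) checkpoint. OLMo checkpoints have historically needed `trust_remote_code=True` (or the `hf_olmo` package) with older transformers versions such as the 4.36 dev build listed above:

```python
# Hypothetical loading sketch -- assumes a full checkpoint, not an adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Litzy619/O0428HMA11"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```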
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA11", "results": []}]}
Litzy619/O0428HMA11
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T02:02:44+00:00
null
null
{}
Qiyp/mimc_rope
null
[ "tensorboard", "region:us" ]
null
2024-04-30T02:03:09+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/w2vxdwf
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:03:29+00:00
null
null
{}
mozksoft/flat2DAnimerge-v40-coreml-q6
null
[ "region:us" ]
null
2024-04-30T02:03:37+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
Charishma27/sft_mistral_709_steps_3_apple_sampled_epoch
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-30T02:05:03+00:00
null
transformers
# Uploaded model - **Developed by:** dmorrigan - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
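The card covers training but not inference; a minimal, hypothetical sketch assuming this repo holds LoRA adapters for the stated 4-bit base, loaded through plain transformers + PEFT rather than Unsloth:

```python
# Hypothetical sketch -- assumes this repo is a LoRA adapter for the 4-bit base.
# Loading the bnb-4bit base weights requires the bitsandbytes package.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-bnb-4bit"
adapter_id = "dmorrigan/HebrewLyricsLoRA-40K-5Epoch"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```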
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
dmorrigan/HebrewLyricsLoRA-40K-5Epoch
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:06:01+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/nlfv3uy
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:06:39+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # financial-sentiment-model-1000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7525 - Accuracy: 0.7 - F1: 0.7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
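For completeness, a minimal, hypothetical usage sketch for this classifier (the example sentence is illustrative; label names come from the fine-tune's config):

```python
# Hypothetical usage sketch for this text-classification fine-tune.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="kevinwlip/financial-sentiment-model-1000-samples",
)
print(clf("Quarterly revenue beat expectations and margins improved."))
# -> [{'label': ..., 'score': ...}]
```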
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "financial-sentiment-model-1000-samples", "results": []}]}
kevinwlip/financial-sentiment-model-1000-samples
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:07:45+00:00
text-generation
transformers
{}
w32zhong/s3d-phi3_fft_layer526
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:08:00+00:00
summarization
transformers
# indobart-small This model is a fine-tuned version of [bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [Liputan6](https://paperswithcode.com/dataset/liputan6) dataset. See a demo of the model in this [notebook](https://colab.research.google.com/drive/1bcqS42M3e5IySPYtAa-S4UeyJczg9DXh?usp=sharing). ## Training procedure ### Training hyperparameters - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | R1 Precision | R1 Recall | R1 Fmeasure | R2 Precision | R2 Recall | R2 Fmeasure | Rl Precision | Rl Recall | Rl Fmeasure | |:-------------:|:-----:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:| | 0.3064 | 1.0 | 0.3487 | 0.6043 | 0.4375 | 0.1318 | 0.2613 | 0.1723 | 0.3349 | 0.5833 | 0.4208 | ## Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1 ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM # Load model and tokenizer model = AutoModelForSeq2SeqLM.from_pretrained("gaduhhartawan/indobart-base") tokenizer = AutoTokenizer.from_pretrained("gaduhhartawan/indobart-base") # Input article for summarization ARTICLE_TO_SUMMARIZE = "lorem ipsum..." # Generate summary input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt') summary_ids = model.generate(input_ids, min_length=30, max_length=150, num_beams=2, repetition_penalty=2.0, length_penalty=0.8, early_stopping=True, no_repeat_ngram_size=2, use_cache=True, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) # Decode the summary summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print("Summary: ", summary_text) ```
{"language": ["id"], "license": "mit", "tags": ["bart"], "datasets": ["id_liputan6"], "metrics": ["rouge"], "pipeline_tag": "summarization"}
gaduhhartawan/indobart-base
null
[ "transformers", "safetensors", "bart", "text2text-generation", "summarization", "id", "dataset:id_liputan6", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:08:33+00:00
text-generation
transformers
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/6MKLoX2ruLIaREiyb6coO.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | |------------------------|-----------|-----------|-----------|-----------| | Initialize From | LLaMA-3 8B| 65K | 262K | 524k | | Sequence Length 2^N | 16 | 18 | 19 | 20 | | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B | | Batch Size | 1 | 1 | 16 | 16 | | Gradient Accumulation Steps | 32 | 16 | 1 | 1 | | Steps | 30 | 24 | 50 | 50 | | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall)| 202 | 555 | 61 | 87 | **Quants**: - [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF) - [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit) ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. 
"World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. 
### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download the original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted (tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency.
100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better,
safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing can not only impact the user experience but could even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. **Misuse** If you access or use Llama 3, you agree to the Acceptable Use Policy.
The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security</span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts and layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide). ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
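As referenced above, a minimal sketch of layering a Llama Guard 2 moderation pass in front of generation. The checkpoint name `meta-llama/Meta-Llama-Guard-2-8B` and the `safe`/`unsafe` output convention are assumptions based on the Purple Llama reference implementation, not details stated in this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id -- verify against the Purple Llama release.
GUARD_ID = "meta-llama/Meta-Llama-Guard-2-8B"
tokenizer = AutoTokenizer.from_pretrained(GUARD_ID)
guard = AutoModelForCausalLM.from_pretrained(GUARD_ID, device_map="auto")

def moderate(chat):
    """Return Llama Guard's verdict ("safe", or "unsafe" plus a category code)."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard.device)
    output = guard.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Screen the user prompt before handing it to the chat model.
verdict = moderate([{"role": "user", "content": "How do I reset my router?"}])
if not verdict.strip().startswith("safe"):
    raise RuntimeError("Prompt flagged by Llama Guard; refusing to generate.")
```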
{"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"}
blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4.2-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:08:48+00:00
null
null
{}
karlitoxz/ServiModel
null
[ "region:us" ]
null
2024-04-30T02:09:22+00:00
null
null
{}
DUAL-GPO-2/zephyr-7b-gpo-log1-i1
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-30T02:09:54+00:00
null
null
{}
Chris8055/bert-finetuned-ner
null
[ "region:us" ]
null
2024-04-30T02:10:02+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4296 - F1 Score: 0.8218 - Accuracy: 0.8225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4925 | 1.1 | 200 | 0.4580 | 0.8028 | 0.8027 | | 0.4557 | 2.21 | 400 | 0.4510 | 0.8060 | 0.8072 | | 0.4499 | 3.31 | 600 | 0.4464 | 0.8042 | 0.8058 | | 0.4395 | 4.42 | 800 | 0.4424 | 0.8066 | 0.8079 | | 0.4391 | 5.52 | 1000 | 0.4490 | 0.8007 | 0.8027 | | 0.4291 | 6.63 | 1200 | 0.4541 | 0.7968 | 0.7992 | | 0.4313 | 7.73 | 1400 | 0.4366 | 0.8060 | 0.8072 | | 0.4228 | 8.84 | 1600 | 0.4589 | 0.7947 | 0.7979 | | 0.4228 | 9.94 | 1800 | 0.4297 | 0.8143 | 0.8145 | | 0.4193 | 11.05 | 2000 | 0.4448 | 0.8044 | 0.8058 | | 0.4188 | 12.15 | 2200 | 0.4314 | 0.8130 | 0.8135 | | 0.4139 | 13.26 | 2400 | 0.4306 | 0.8092 | 0.8100 | | 0.415 | 14.36 | 2600 | 0.4272 | 0.8132 | 0.8138 | | 0.4126 | 15.47 | 2800 | 0.4396 | 0.8075 | 0.8089 | | 0.4105 | 16.57 | 3000 | 0.4327 | 0.8148 | 0.8148 | | 0.4098 | 17.68 | 3200 | 0.4307 | 0.8124 | 0.8131 | | 0.405 | 18.78 | 3400 | 0.4389 | 0.8098 | 0.8110 | | 0.4054 | 19.89 | 3600 | 0.4358 | 0.8099 | 0.8110 | | 0.4054 | 20.99 | 3800 | 0.4408 | 0.8114 | 0.8124 | | 0.4032 | 22.1 | 4000 | 0.4319 | 0.8084 | 0.8096 | | 0.4011 | 23.2 | 4200 | 0.4315 | 0.8134 | 0.8141 | | 0.4006 | 24.31 | 4400 | 0.4423 | 0.8098 | 0.8114 | | 0.3961 | 25.41 | 4600 | 0.4382 | 0.8149 | 0.8159 | | 0.4012 | 26.52 | 4800 | 0.4318 | 0.8161 | 0.8169 | | 0.4009 | 27.62 | 5000 | 0.4319 | 0.8166 | 0.8176 | | 0.3955 | 28.73 | 5200 | 0.4295 | 0.8145 | 0.8155 | | 0.3934 | 29.83 | 5400 | 0.4325 | 0.8141 | 0.8148 | | 0.3945 | 30.94 | 5600 | 0.4320 | 0.8162 | 0.8169 | | 0.3929 | 32.04 | 5800 | 0.4342 | 0.8157 | 0.8162 | | 0.3925 | 33.15 | 6000 | 0.4293 | 0.8156 | 0.8166 | | 0.3931 | 34.25 | 6200 | 0.4330 | 0.8134 | 0.8141 | | 0.3883 | 35.36 | 6400 | 0.4372 | 0.8167 | 0.8176 | | 0.3917 | 36.46 | 6600 | 0.4272 | 0.8188 | 0.8193 | | 0.3895 | 37.57 | 6800 | 0.4318 | 0.8156 | 0.8166 | | 0.3889 | 38.67 | 7000 | 0.4313 | 0.8174 | 0.8183 | | 0.385 | 39.78 | 7200 | 0.4342 | 0.8164 | 0.8173 | | 0.3904 | 40.88 | 7400 | 0.4298 | 0.8154 | 0.8159 | | 0.3863 | 41.99 | 7600 | 0.4323 | 0.8161 | 0.8169 | | 0.3862 | 43.09 | 7800 | 0.4362 | 0.8164 | 0.8173 | | 0.3872 | 44.2 | 8000 | 0.4349 | 0.8151 | 0.8162 | | 0.3857 | 45.3 | 8200 | 0.4290 | 0.8170 | 0.8176 | | 0.382 | 46.41 | 8400 | 0.4305 | 0.8174 | 0.8180 | | 0.3883 | 47.51 | 8600 | 0.4331 | 0.8169 | 0.8180 | | 0.3808 | 48.62 | 8800 | 0.4348 | 0.8162 | 0.8173 | | 0.3836 
| 49.72 | 9000 | 0.4346 | 0.8162 | 0.8173 | | 0.385 | 50.83 | 9200 | 0.4380 | 0.8141 | 0.8155 | | 0.3831 | 51.93 | 9400 | 0.4341 | 0.8155 | 0.8166 | | 0.3824 | 53.04 | 9600 | 0.4324 | 0.8171 | 0.8180 | | 0.3803 | 54.14 | 9800 | 0.4326 | 0.8161 | 0.8169 | | 0.382 | 55.25 | 10000 | 0.4344 | 0.8159 | 0.8169 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
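The hyperparameters reported above can be reconstructed as a `TrainingArguments` object. This is a hedged sketch: the actual training script is not included in the card, the PEFT adapter configuration (rank, alpha, target modules) is not reported, and the batch size is assumed to be per-device.

```python
from transformers import TrainingArguments

# Values copied from the card's hyperparameter list; output_dir is illustrative.
args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,   # assumes a single device
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,                  # the card trains for a fixed 10,000 steps
    evaluation_strategy="steps",
    eval_steps=200,                    # matches the 200-step cadence in the results table
)
```

The sibling L8_f and L32_f cards below report the same hyperparameters, so this sketch applies to them as well.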
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:10:08+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4254 - F1 Score: 0.8273 - Accuracy: 0.8277 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4821 | 1.1 | 200 | 0.4463 | 0.8110 | 0.8110 | | 0.4456 | 2.21 | 400 | 0.4363 | 0.8128 | 0.8135 | | 0.4377 | 3.31 | 600 | 0.4392 | 0.8029 | 0.8048 | | 0.4231 | 4.42 | 800 | 0.4418 | 0.8026 | 0.8041 | | 0.4221 | 5.52 | 1000 | 0.4398 | 0.8081 | 0.8100 | | 0.4099 | 6.63 | 1200 | 0.4558 | 0.8065 | 0.8089 | | 0.4116 | 7.73 | 1400 | 0.4356 | 0.8135 | 0.8152 | | 0.4011 | 8.84 | 1600 | 0.4595 | 0.8074 | 0.8103 | | 0.3996 | 9.94 | 1800 | 0.4245 | 0.8146 | 0.8152 | | 0.3953 | 11.05 | 2000 | 0.4438 | 0.8073 | 0.8079 | | 0.3926 | 12.15 | 2200 | 0.4207 | 0.8227 | 0.8232 | | 0.3855 | 13.26 | 2400 | 0.4189 | 0.8243 | 0.8249 | | 0.3876 | 14.36 | 2600 | 0.4192 | 0.8281 | 0.8284 | | 0.3807 | 15.47 | 2800 | 0.4265 | 0.8216 | 0.8225 | | 0.3775 | 16.57 | 3000 | 0.4232 | 0.8248 | 0.8249 | | 0.3745 | 17.68 | 3200 | 0.4212 | 0.8239 | 0.8245 | | 0.3687 | 18.78 | 3400 | 0.4597 | 0.8051 | 0.8083 | | 0.3681 | 19.89 | 3600 | 0.4259 | 0.8195 | 0.8207 | | 0.364 | 20.99 | 3800 | 0.4339 | 0.8158 | 0.8173 | | 0.3606 | 22.1 | 4000 | 0.4220 | 0.8201 | 0.8204 | | 0.3589 | 23.2 | 4200 | 0.4268 | 0.8186 | 0.8193 | | 0.3531 | 24.31 | 4400 | 0.4384 | 0.8144 | 0.8162 | | 0.3495 | 25.41 | 4600 | 0.4317 | 0.8262 | 0.8263 | | 0.3546 | 26.52 | 4800 | 0.4296 | 0.8186 | 0.8193 | | 0.3484 | 27.62 | 5000 | 0.4367 | 0.8198 | 0.8214 | | 0.3459 | 28.73 | 5200 | 0.4349 | 0.8184 | 0.8197 | | 0.3405 | 29.83 | 5400 | 0.4344 | 0.8154 | 0.8162 | | 0.3405 | 30.94 | 5600 | 0.4304 | 0.8230 | 0.8239 | | 0.3381 | 32.04 | 5800 | 0.4300 | 0.8195 | 0.8197 | | 0.3366 | 33.15 | 6000 | 0.4373 | 0.8240 | 0.8252 | | 0.335 | 34.25 | 6200 | 0.4381 | 0.8191 | 0.8193 | | 0.3281 | 35.36 | 6400 | 0.4550 | 0.8225 | 0.8235 | | 0.3323 | 36.46 | 6600 | 0.4338 | 0.8224 | 0.8232 | | 0.3295 | 37.57 | 6800 | 0.4406 | 0.8192 | 0.8204 | | 0.3261 | 38.67 | 7000 | 0.4415 | 0.8204 | 0.8214 | | 0.3243 | 39.78 | 7200 | 0.4425 | 0.8224 | 0.8235 | | 0.3262 | 40.88 | 7400 | 0.4315 | 0.8198 | 0.8200 | | 0.3232 | 41.99 | 7600 | 0.4392 | 0.8171 | 0.8183 | | 0.3241 | 43.09 | 7800 | 0.4418 | 0.8228 | 0.8235 | | 0.3202 | 44.2 | 8000 | 0.4426 | 0.8187 | 0.8197 | | 0.3201 | 45.3 | 8200 | 0.4383 | 0.8210 | 0.8214 | | 0.3166 | 46.41 | 8400 | 0.4383 | 0.8208 | 0.8214 | | 0.3186 | 47.51 | 8600 | 0.4454 | 0.8218 | 0.8228 | | 0.3102 | 48.62 | 8800 | 0.4445 | 0.8212 | 0.8221 | | 
0.3143 | 49.72 | 9000 | 0.4470 | 0.8209 | 0.8218 | | 0.3164 | 50.83 | 9200 | 0.4476 | 0.8190 | 0.8204 | | 0.3113 | 51.93 | 9400 | 0.4463 | 0.8208 | 0.8218 | | 0.3099 | 53.04 | 9600 | 0.4432 | 0.8211 | 0.8218 | | 0.3081 | 54.14 | 9800 | 0.4443 | 0.8208 | 0.8214 | | 0.3096 | 55.25 | 10000 | 0.4462 | 0.8220 | 0.8228 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:12:00+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4267 - F1 Score: 0.8228 - Accuracy: 0.8232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4764 | 1.1 | 200 | 0.4380 | 0.8112 | 0.8114 | | 0.4381 | 2.21 | 400 | 0.4305 | 0.8106 | 0.8117 | | 0.4268 | 3.31 | 600 | 0.4299 | 0.8055 | 0.8065 | | 0.4129 | 4.42 | 800 | 0.4419 | 0.8070 | 0.8093 | | 0.4092 | 5.52 | 1000 | 0.4268 | 0.8149 | 0.8166 | | 0.3941 | 6.63 | 1200 | 0.4522 | 0.8068 | 0.8096 | | 0.3919 | 7.73 | 1400 | 0.4270 | 0.8182 | 0.8197 | | 0.3788 | 8.84 | 1600 | 0.4612 | 0.8045 | 0.8079 | | 0.3739 | 9.94 | 1800 | 0.4191 | 0.8281 | 0.8287 | | 0.3658 | 11.05 | 2000 | 0.4359 | 0.8158 | 0.8159 | | 0.3602 | 12.15 | 2200 | 0.4162 | 0.8307 | 0.8311 | | 0.3471 | 13.26 | 2400 | 0.4247 | 0.8229 | 0.8239 | | 0.3454 | 14.36 | 2600 | 0.4207 | 0.8289 | 0.8291 | | 0.3342 | 15.47 | 2800 | 0.4371 | 0.8172 | 0.8180 | | 0.3245 | 16.57 | 3000 | 0.4329 | 0.8222 | 0.8221 | | 0.3179 | 17.68 | 3200 | 0.4430 | 0.8146 | 0.8152 | | 0.3075 | 18.78 | 3400 | 0.4965 | 0.7971 | 0.8003 | | 0.3012 | 19.89 | 3600 | 0.4450 | 0.8216 | 0.8225 | | 0.2906 | 20.99 | 3800 | 0.4661 | 0.8151 | 0.8162 | | 0.2801 | 22.1 | 4000 | 0.4618 | 0.8218 | 0.8218 | | 0.2748 | 23.2 | 4200 | 0.4734 | 0.8115 | 0.8124 | | 0.2642 | 24.31 | 4400 | 0.5041 | 0.8032 | 0.8044 | | 0.2551 | 25.41 | 4600 | 0.5074 | 0.8081 | 0.8089 | | 0.2536 | 26.52 | 4800 | 0.5061 | 0.7931 | 0.7947 | | 0.2485 | 27.62 | 5000 | 0.5218 | 0.8000 | 0.8020 | | 0.2397 | 28.73 | 5200 | 0.4901 | 0.8071 | 0.8083 | | 0.2293 | 29.83 | 5400 | 0.5268 | 0.7981 | 0.7992 | | 0.2272 | 30.94 | 5600 | 0.5205 | 0.8129 | 0.8131 | | 0.218 | 32.04 | 5800 | 0.5089 | 0.8119 | 0.8121 | | 0.2167 | 33.15 | 6000 | 0.5431 | 0.8035 | 0.8044 | | 0.2099 | 34.25 | 6200 | 0.5419 | 0.8113 | 0.8114 | | 0.2042 | 35.36 | 6400 | 0.5599 | 0.8094 | 0.8100 | | 0.2014 | 36.46 | 6600 | 0.5510 | 0.8078 | 0.8086 | | 0.1992 | 37.57 | 6800 | 0.5469 | 0.8102 | 0.8107 | | 0.1888 | 38.67 | 7000 | 0.5835 | 0.8086 | 0.8096 | | 0.188 | 39.78 | 7200 | 0.5681 | 0.8132 | 0.8141 | | 0.1853 | 40.88 | 7400 | 0.5798 | 0.8029 | 0.8037 | | 0.1798 | 41.99 | 7600 | 0.5693 | 0.8074 | 0.8086 | | 0.1779 | 43.09 | 7800 | 0.5952 | 0.8127 | 0.8135 | | 0.1745 | 44.2 | 8000 | 0.5988 | 0.8070 | 0.8076 | | 0.171 | 45.3 | 8200 | 0.5874 | 0.8056 | 0.8062 | | 0.1648 | 46.41 | 8400 | 0.6126 | 0.8043 | 0.8055 | | 0.1695 | 47.51 | 8600 | 0.6173 | 0.8072 | 0.8083 | | 0.1622 | 48.62 | 8800 | 0.6059 | 0.8049 | 0.8055 | | 
0.1594 | 49.72 | 9000 | 0.6308 | 0.8064 | 0.8076 | | 0.1633 | 50.83 | 9200 | 0.6171 | 0.8004 | 0.8017 | | 0.1542 | 51.93 | 9400 | 0.6232 | 0.8114 | 0.8121 | | 0.1529 | 53.04 | 9600 | 0.6267 | 0.8081 | 0.8089 | | 0.1544 | 54.14 | 9800 | 0.6244 | 0.8083 | 0.8089 | | 0.1524 | 55.25 | 10000 | 0.6277 | 0.8082 | 0.8089 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:12:33+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me1-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5118 - F1 Score: 0.7666 - Accuracy: 0.7674 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5989 | 1.01 | 200 | 0.5886 | 0.7054 | 0.7102 | | 0.5718 | 2.02 | 400 | 0.5668 | 0.7254 | 0.7273 | | 0.5573 | 3.03 | 600 | 0.5544 | 0.7293 | 0.7317 | | 0.5494 | 4.04 | 800 | 0.5505 | 0.7390 | 0.7412 | | 0.5421 | 5.05 | 1000 | 0.5408 | 0.7439 | 0.7453 | | 0.5367 | 6.06 | 1200 | 0.5384 | 0.7451 | 0.7475 | | 0.5328 | 7.07 | 1400 | 0.5390 | 0.7483 | 0.7506 | | 0.5322 | 8.08 | 1600 | 0.5394 | 0.7446 | 0.7475 | | 0.5283 | 9.09 | 1800 | 0.5305 | 0.7548 | 0.7566 | | 0.525 | 10.1 | 2000 | 0.5294 | 0.7526 | 0.7541 | | 0.5226 | 11.11 | 2200 | 0.5340 | 0.7504 | 0.7522 | | 0.5216 | 12.12 | 2400 | 0.5258 | 0.7542 | 0.7554 | | 0.5188 | 13.13 | 2600 | 0.5317 | 0.7531 | 0.7551 | | 0.5189 | 14.14 | 2800 | 0.5259 | 0.7528 | 0.7547 | | 0.5161 | 15.15 | 3000 | 0.5287 | 0.7537 | 0.7557 | | 0.5174 | 16.16 | 3200 | 0.5241 | 0.7537 | 0.7560 | | 0.5135 | 17.17 | 3400 | 0.5300 | 0.7546 | 0.7563 | | 0.5155 | 18.18 | 3600 | 0.5182 | 0.7628 | 0.7639 | | 0.5124 | 19.19 | 3800 | 0.5212 | 0.7585 | 0.7601 | | 0.5101 | 20.2 | 4000 | 0.5210 | 0.7597 | 0.7610 | | 0.5075 | 21.21 | 4200 | 0.5264 | 0.7525 | 0.7551 | | 0.5097 | 22.22 | 4400 | 0.5239 | 0.7587 | 0.7604 | | 0.5046 | 23.23 | 4600 | 0.5246 | 0.7530 | 0.7554 | | 0.5118 | 24.24 | 4800 | 0.5209 | 0.7508 | 0.7538 | | 0.5044 | 25.25 | 5000 | 0.5164 | 0.7600 | 0.7610 | | 0.5067 | 26.26 | 5200 | 0.5184 | 0.7642 | 0.7648 | | 0.5034 | 27.27 | 5400 | 0.5183 | 0.7579 | 0.7598 | | 0.5061 | 28.28 | 5600 | 0.5151 | 0.7618 | 0.7626 | | 0.505 | 29.29 | 5800 | 0.5236 | 0.7526 | 0.7560 | | 0.4997 | 30.3 | 6000 | 0.5172 | 0.7578 | 0.7598 | | 0.5028 | 31.31 | 6200 | 0.5198 | 0.7574 | 0.7592 | | 0.5023 | 32.32 | 6400 | 0.5236 | 0.7536 | 0.7566 | | 0.4991 | 33.33 | 6600 | 0.5221 | 0.7544 | 0.7569 | | 0.4986 | 34.34 | 6800 | 0.5186 | 0.7566 | 0.7588 | | 0.4967 | 35.35 | 7000 | 0.5191 | 0.7574 | 0.7592 | | 0.5004 | 36.36 | 7200 | 0.5165 | 0.7574 | 0.7595 | | 0.5001 | 37.37 | 7400 | 0.5180 | 0.7551 | 0.7576 | | 0.499 | 38.38 | 7600 | 0.5176 | 0.7611 | 0.7623 | | 0.4986 | 39.39 | 7800 | 0.5171 | 0.7564 | 0.7582 | | 0.4977 | 40.4 | 8000 | 0.5209 | 0.7565 | 0.7585 | | 0.4964 | 41.41 | 8200 | 0.5190 | 0.7546 | 0.7573 | | 0.5 | 42.42 | 8400 | 0.5204 | 0.7543 | 0.7573 | | 0.4965 | 43.43 | 8600 | 0.5198 | 0.7548 | 0.7573 | | 0.4928 | 44.44 | 8800 | 0.5181 | 0.7585 | 0.7604 | | 0.4953 | 
45.45 | 9000 | 0.5175 | 0.7570 | 0.7588 | | 0.4932 | 46.46 | 9200 | 0.5196 | 0.7571 | 0.7592 | | 0.4999 | 47.47 | 9400 | 0.5202 | 0.7530 | 0.7560 | | 0.4888 | 48.48 | 9600 | 0.5202 | 0.7543 | 0.7566 | | 0.5001 | 49.49 | 9800 | 0.5192 | 0.7541 | 0.7566 | | 0.4915 | 50.51 | 10000 | 0.5186 | 0.7550 | 0.7573 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:13:10+00:00
text-generation
transformers
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Andrew Chahnwoo Park - **Model type:** LLaMA - **Language(s) (NLP):** English - **License:** apache-2.0 - **Finetuned from model:** [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) ### Model Sources - **Repository:** [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) - **GitHub:** [TinyLlama](https://github.com/jzhang38/TinyLlama) ## Training Details ### Training Data [DataBricks Instruction-Tuning Dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k) (5% utilized) ### Training Procedure 1. Tokenize and label data 2. Load LLM 3. Apply Quantized Low-Rank Adaptation (QLoRA) to modules ["q_proj","k_proj","v_proj","o_proj"] 4. Perform training with HuggingFace Trainer 5. Use DataCollatorForSeq2Seq - Note that this data collator was chosen over the DataCollatorForLanguageModeling as the latter overwrites pre-defined "labels" - This overwriting is done by the tf_mask_tokens and torch_mask_tokens functions of [DataCollatorForLanguageModeling](https://github.com/huggingface/transformers/blob/main/src/transformers/data/data_collator.py#L634) #### Preprocessing Utilized different instruction prompt templates for each category in the dataset. ##### open_qa ### Instruction: Answer the question below. Be as specific and concise as possible. ### Question: {instruction} ### Response: {response} ##### general_qa ### Instruction: Answer the question below to the best of your knowledge. ### Question: {instruction} ### Response: {response} ##### classification ### Instruction: You will be given a question and a list of potential answers to that question. You are to select the correct answers out of the available choices. ### Question: {instruction} ### Response: {response} ##### closed_qa ### Instruction: You will be given a question to answer and context that contains pertinent information. Provide a concise and accurate response to the question using the information provided in the context. ### Question: {instruction} ### Context: {context} ### Response: {response} ##### brainstorming ### Instruction: You will be given a question that does not have a correct answer. You are to brainstorm one possible answer to the provided question. ### Question: {instruction} ### Response: {response} ##### information_extraction ### Instruction: You will be given a question or query and some context that can be used to answer it. You are to extract relevant information from the provided context to provide an accurate response to the given query. ### Question: {instruction} ### Context: {context} ### Response: {response} ##### summarization ### Instruction: You will be given a question or request and context that can be used for your response. You are to summarize the provided context to provide an answer to the question. ### Question: {instruction} ### Context: {context} ### Response: {response} ##### creative_writing ### Instruction: You will be given a prompt that you are to write about. Be creative. ### Prompt: {instruction} ### Response: {response} #### Labelled Data Format { 'input_ids' : List[int], 'attention_mask' : List[int], 'labels' : List[int] } Where labels were created by masking everything but the "response" with the mask token (-100). ### Hardware Fine-tuning was performed on Google Colab in a single session (T4). 
The dataset was not fully utilized due to the limitations of the free session.
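A hedged sketch of the procedure described above: the card names the quantization approach, the target modules, and the collator choice, but the LoRA rank, alpha, and dropout values here are illustrative assumptions.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, DataCollatorForSeq2Seq)
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb)
tokenizer = AutoTokenizer.from_pretrained(base)

# QLoRA on the attention projections listed in the card; r/alpha/dropout are assumed.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)

# DataCollatorForSeq2Seq pads pre-computed "labels" with -100 instead of
# overwriting them, which is why it was preferred over
# DataCollatorForLanguageModeling here.
collator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=-100)
```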
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["databricks/databricks-dolly-15k"]}
Chahnwoo/TinyLlama-1.1B-Chat-v1.0-0.05E-QLoRA-Databricks-SFT-Test_20240430
null
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:databricks/databricks-dolly-15k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-30T02:13:13+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_name-finetuned-squad This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9280 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 78 | 3.3801 | | No log | 2.0 | 156 | 2.9967 | | No log | 3.0 | 234 | 2.9280 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
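As a hedged usage sketch (the card does not include one), the checkpoint should load with the standard transformers question-answering pipeline; note that AraBERT models often expect text preprocessed with the arabert package, and the Arabic example strings here are illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="omarezz/model_name-finetuned-squad")

# Illustrative Arabic example: "What is the capital of Egypt?" /
# "Cairo is the capital of Egypt."
result = qa(question="ما هي عاصمة مصر؟", context="القاهرة هي عاصمة مصر.")
print(result["answer"], result["score"])
```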
{"tags": ["generated_from_trainer"], "base_model": "aubmindlab/bert-base-arabertv2", "model-index": [{"name": "model_name-finetuned-squad", "results": []}]}
omarezz/model_name-finetuned-squad
null
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv2", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:13:30+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me1-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5147 - F1 Score: 0.7679 - Accuracy: 0.7699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5898 | 1.01 | 200 | 0.5640 | 0.7306 | 0.7336 | | 0.5489 | 2.02 | 400 | 0.5417 | 0.7422 | 0.7443 | | 0.5352 | 3.03 | 600 | 0.5263 | 0.7522 | 0.7538 | | 0.5276 | 4.04 | 800 | 0.5279 | 0.7562 | 0.7576 | | 0.5221 | 5.05 | 1000 | 0.5233 | 0.7606 | 0.7614 | | 0.5164 | 6.06 | 1200 | 0.5190 | 0.7576 | 0.7592 | | 0.5115 | 7.07 | 1400 | 0.5254 | 0.7556 | 0.7579 | | 0.5099 | 8.08 | 1600 | 0.5241 | 0.7487 | 0.7516 | | 0.505 | 9.09 | 1800 | 0.5134 | 0.7596 | 0.7610 | | 0.5001 | 10.1 | 2000 | 0.5164 | 0.7553 | 0.7573 | | 0.495 | 11.11 | 2200 | 0.5267 | 0.7543 | 0.7566 | | 0.4942 | 12.12 | 2400 | 0.5144 | 0.7605 | 0.7620 | | 0.4898 | 13.13 | 2600 | 0.5187 | 0.7552 | 0.7585 | | 0.4888 | 14.14 | 2800 | 0.5149 | 0.7563 | 0.7592 | | 0.4832 | 15.15 | 3000 | 0.5146 | 0.7586 | 0.7610 | | 0.4832 | 16.16 | 3200 | 0.5145 | 0.7548 | 0.7579 | | 0.4795 | 17.17 | 3400 | 0.5196 | 0.7602 | 0.7620 | | 0.4782 | 18.18 | 3600 | 0.5096 | 0.7612 | 0.7626 | | 0.4723 | 19.19 | 3800 | 0.5127 | 0.7566 | 0.7585 | | 0.4661 | 20.2 | 4000 | 0.5137 | 0.7615 | 0.7636 | | 0.4686 | 21.21 | 4200 | 0.5153 | 0.7540 | 0.7576 | | 0.4631 | 22.22 | 4400 | 0.5181 | 0.7639 | 0.7655 | | 0.4572 | 23.23 | 4600 | 0.5282 | 0.7586 | 0.7604 | | 0.4657 | 24.24 | 4800 | 0.5198 | 0.7531 | 0.7569 | | 0.4568 | 25.25 | 5000 | 0.5150 | 0.7582 | 0.7592 | | 0.459 | 26.26 | 5200 | 0.5173 | 0.7583 | 0.7585 | | 0.4514 | 27.27 | 5400 | 0.5218 | 0.7532 | 0.7563 | | 0.4525 | 28.28 | 5600 | 0.5156 | 0.7584 | 0.7595 | | 0.4516 | 29.29 | 5800 | 0.5225 | 0.7556 | 0.7592 | | 0.444 | 30.3 | 6000 | 0.5216 | 0.7584 | 0.7604 | | 0.4464 | 31.31 | 6200 | 0.5201 | 0.7618 | 0.7633 | | 0.4466 | 32.32 | 6400 | 0.5273 | 0.7549 | 0.7579 | | 0.4416 | 33.33 | 6600 | 0.5285 | 0.7575 | 0.7607 | | 0.4398 | 34.34 | 6800 | 0.5214 | 0.7587 | 0.7604 | | 0.4359 | 35.35 | 7000 | 0.5268 | 0.7616 | 0.7633 | | 0.4401 | 36.36 | 7200 | 0.5264 | 0.7524 | 0.7547 | | 0.4372 | 37.37 | 7400 | 0.5277 | 0.7555 | 0.7579 | | 0.4357 | 38.38 | 7600 | 0.5222 | 0.7609 | 0.7620 | | 0.4321 | 39.39 | 7800 | 0.5293 | 0.7580 | 0.7592 | | 0.4335 | 40.4 | 8000 | 0.5301 | 0.7584 | 0.7601 | | 0.4316 | 41.41 | 8200 | 0.5335 | 0.7565 | 0.7598 | | 0.4344 | 42.42 | 8400 | 0.5316 | 0.7565 | 0.7588 | | 0.4274 | 43.43 | 8600 | 0.5326 | 0.7546 | 0.7569 | | 0.4268 | 44.44 | 8800 | 0.5300 | 0.7575 | 0.7595 | | 0.4267 | 
45.45 | 9000 | 0.5297 | 0.7584 | 0.7601 | | 0.4275 | 46.46 | 9200 | 0.5324 | 0.7602 | 0.7620 | | 0.429 | 47.47 | 9400 | 0.5347 | 0.7515 | 0.7547 | | 0.4189 | 48.48 | 9600 | 0.5337 | 0.7569 | 0.7592 | | 0.4321 | 49.49 | 9800 | 0.5317 | 0.7564 | 0.7588 | | 0.4227 | 50.51 | 10000 | 0.5316 | 0.7551 | 0.7573 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:13:48+00:00
null
null
{"license": "apache-2.0"}
thiagalvao/caneca
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-30T02:16:23+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - yuffish/colon-04 This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python
# Suggested snippet (hedged, not part of the original card): load this
# checkpoint with the standard diffusers text-to-image pipeline and prompt
# it with the instance prompt the weights were trained on.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "yuffish/colon-04", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks object").images[0]
image.save("sks_object.png")
``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "stabilityai/stable-diffusion-2-1-base", "instance_prompt": "a photo of sks object"}
yuffish/colon-04
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-30T02:16:26+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tduch/gemma-7b-it-alex-street
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:17:45+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zinoli/image_text
null
[ "transformers", "safetensors", "blip", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:18:40+00:00
null
null
{}
ichini99/TES
null
[ "region:us" ]
null
2024-04-30T02:20:09+00:00
null
null
{}
jdqwoi/TooManyMixed-LLM_04.gguf
null
[ "gguf", "region:us" ]
null
2024-04-30T02:21:56+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
baaaaaaaam/v1
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:23:30+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Talhat/summarizationTest
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:23:37+00:00
null
null
<!-- WEASEL: AUTO-GENERATED DOCS START (do not remove) --> # 🪐 Weasel Project: Citations of ECFR Banking Regulation in a spaCy pipeline. Custom text classification project for spaCy v3 adapted from a spaCy v3 project template. ## 📋 project.yml The [`project.yml`](project.yml) defines the data assets required by the project, as well as the available commands and workflows. For details, see the [Weasel documentation](https://github.com/explosion/weasel). ### ⏯ Commands The following commands are defined by the project. They can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run). Commands are only re-run if their inputs have changed. | Command | Description | | --- | --- | | `format-script` | Execute the Python script `firstStep-format.py`, which performs the initial formatting of a dataset file for the first step of the project. This script extracts text and labels from a dataset file in JSONL format and writes them to a new JSONL file in a specific format. Usage: ``` spacy project run format-script ``` Explanation: - The script `firstStep-format.py` reads data from the file specified in the `dataset_file` variable (`data/train200.jsonl` by default). - It extracts text and labels from each JSON object in the dataset file. - If both text and at least one label are available, it writes a new JSON object to the output file specified in the `output_file` variable (`data/firstStep_file.jsonl` by default) with the extracted text and label. - If either text or label is missing in a JSON object, a warning message is printed. - Upon completion, the script prints a message confirming the processing and the path to the output file. | | `train-text-classification-model` | Train the text classification model for the second step of the project using the `secondStep-score.py` script. This script loads a blank English spaCy model and adds a text classification pipeline to it. It then trains the model using the processed data from the first step. Usage: ``` spacy project run train-text-classification-model ``` Explanation: - The script `secondStep-score.py` loads a blank English spaCy model and adds a text classification pipeline to it. - It reads processed data from the file specified in the `processed_data_file` variable (`data/firstStep_file.jsonl` by default). - The processed data is converted to spaCy format for training the model. - The model is trained using the converted data for a specified number of iterations (`n_iter`). - Losses are printed for each iteration during training. - Upon completion, the trained model is saved to the specified output directory (`./my_trained_model` by default). | | `classify-unlabeled-data` | Classify the unlabeled data for the third step of the project using the `thirdStep-label.py` script. This script loads the trained spaCy model from the previous step and classifies each record in the unlabeled dataset. Usage: ``` spacy project run classify-unlabeled-data ``` Explanation: - The script `thirdStep-label.py` loads the trained spaCy model from the specified model directory (`./my_trained_model` by default). - It reads the unlabeled data from the file specified in the `unlabeled_data_file` variable (`data/train.jsonl` by default). - Each record in the unlabeled data is classified using the loaded model. - The predicted labels for each record are extracted and stored along with the text. 
- The classified data is optionally saved to a file specified in the `output_file` variable (`data/thirdStep_file.jsonl` by default). | | `format-labeled-data` | Format the labeled data for the final step of the project using the `finalStep-formatLabel.py` script. This script processes the classified data from the third step and transforms it into a specific format, considering a threshold for label acceptance. Usage: ``` spacy project run format-labeled-data ``` Explanation: - The script `finalStep-formatLabel.py` reads classified data from the file specified in the `input_file` variable (`data/thirdStep_file.jsonl` by default). - For each record, it determines accepted categories based on a specified threshold. - It constructs an output record containing the text, predicted labels, accepted categories, answer (accept/reject), and options with meta information. - The transformed data is written to the file specified in the `output_file` variable (`data/train4465.jsonl` by default). | | `setup-environment` | Set up the Python virtual environment. | | `review-evaluation-data` | Review the evaluation data in Prodigy and automatically accept annotations. Usage: ``` spacy project run review-evaluation-data ``` Explanation: - The command reviews the evaluation data in Prodigy. - It automatically accepts annotations made during the review process. - Only sessions allowed by the environment variable PRODIGY_ALLOWED_SESSIONS are permitted to review data. In this case, the session 'reviwer' is allowed. | | `export-reviewed-evaluation-data` | Export the reviewed evaluation data from Prodigy to a JSONL file named 'goldenEval.jsonl'. Usage: ``` spacy project run export-reviewed-evaluation-data ``` Explanation: - The command exports the reviewed evaluation data from Prodigy to a JSONL file. - The data is exported from the Prodigy database associated with the project named 'project3eval-review'. - The exported data is saved to the file 'goldenEval.jsonl'. - This command helps in preserving the reviewed annotations for further analysis or processing. | | `import-training-data` | Import the training data into Prodigy from a JSONL file named 'train200.jsonl'. Usage: ``` spacy project run import-training-data ``` Explanation: - The command imports the training data into Prodigy from the specified JSONL file. - The data is imported into the Prodigy database associated with the project named 'prodigy3train'. - This command prepares the training data for annotation and model training in Prodigy. | | `import-golden-evaluation-data` | Import the golden evaluation data into Prodigy from a JSONL file named 'goldeneval.jsonl'. Usage: ``` spacy project run import-golden-evaluation-data ``` Explanation: - The command imports the golden evaluation data into Prodigy from the specified JSONL file. - The data is imported into the Prodigy database associated with the project named 'golden3'. - This command prepares the golden evaluation data for further analysis and model evaluation in Prodigy. | | `train-model-experiment1` | Train a text classification model using Prodigy with the 'prodigy3train' dataset and evaluating on 'golden3'. Usage: ``` spacy project run train-model-experiment1 ``` Explanation: - The command trains a text classification model using Prodigy. - It uses the 'prodigy3train' dataset for training and evaluates the model on the 'golden3' dataset. - The trained model is saved to the './output/experiment1' directory. | | `download-model` | Download the English language model 'en_core_web_lg' from spaCy. 
Usage: ``` spacy project run download-model ``` Explanation: - The command downloads the English language model 'en_core_web_lg' from spaCy. - This model is used as the base model for further data processing and training in the project. | | `convert-data-to-spacy-format` | Convert the annotated data from Prodigy to spaCy format using the 'prodigy3train' and 'golden3' datasets. Usage: ``` spacy project run convert-data-to-spacy-format ``` Explanation: - The command converts the annotated data from Prodigy to spaCy format. - It uses the 'prodigy3train' and 'golden3' datasets for conversion. - The converted data is saved to the './corpus' directory with the base model 'en_core_web_lg'. | | `train-custom-model` | Train a custom text classification model using spaCy with the converted data in spaCy format. Usage: ``` spacy project run train-custom-model ``` Explanation: - The command trains a custom text classification model using spaCy. - It uses the converted data in spaCy format located in the './corpus' directory. - The model is trained using the configuration defined in 'corpus/config.cfg'. | ### ⏭ Workflows The following workflows are defined by the project. They can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run) and will run the specified commands in order. Commands are only re-run if their inputs have changed. | Workflow | Steps | | --- | --- | | `all` | `format-script` &rarr; `train-text-classification-model` &rarr; `classify-unlabeled-data` &rarr; `format-labeled-data` &rarr; `setup-environment` &rarr; `review-evaluation-data` &rarr; `export-reviewed-evaluation-data` &rarr; `import-training-data` &rarr; `import-golden-evaluation-data` &rarr; `train-model-experiment1` &rarr; `download-model` &rarr; `convert-data-to-spacy-format` &rarr; `train-custom-model` | ### 🗂 Assets The following assets are defined by the project. They can be fetched by running [`weasel assets`](https://github.com/explosion/weasel/tree/main/docs/cli.md#open_file_folder-assets) in the project directory. 
| File | Source | Description | | --- | --- | --- | | [`corpus/labels/ner.json`](corpus/labels/ner.json) | Local | JSON file containing NER labels | | [`corpus/labels/parser.json`](corpus/labels/parser.json) | Local | JSON file containing parser labels | | [`corpus/labels/tagger.json`](corpus/labels/tagger.json) | Local | JSON file containing tagger labels | | [`corpus/labels/textcat_multilabel.json`](corpus/labels/textcat_multilabel.json) | Local | JSON file containing multilabel text classification labels | | [`data/eval.jsonl`](data/eval.jsonl) | Local | JSONL file containing evaluation data | | [`data/firstStep_file.jsonl`](data/firstStep_file.jsonl) | Local | JSONL file containing formatted data from the first step | | `data/five_examples_annotated5.jsonl` | Local | JSONL file containing five annotated examples | | [`data/goldenEval.jsonl`](data/goldenEval.jsonl) | Local | JSONL file containing golden evaluation data | | [`data/thirdStep_file.jsonl`](data/thirdStep_file.jsonl) | Local | JSONL file containing classified data from the third step | | [`data/train.jsonl`](data/train.jsonl) | Local | JSONL file containing training data | | [`data/train200.jsonl`](data/train200.jsonl) | Local | JSONL file containing initial training data | | [`data/train4465.jsonl`](data/train4465.jsonl) | Local | JSONL file containing formatted and labeled training data | | [`my_trained_model/textcat_multilabel/cfg`](my_trained_model/textcat_multilabel/cfg) | Local | Configuration files for the text classification model | | [`my_trained_model/textcat_multilabel/model`](my_trained_model/textcat_multilabel/model) | Local | Trained model files for the text classification model | | [`my_trained_model/vocab/key2row`](my_trained_model/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary | | [`my_trained_model/vocab/lookups.bin`](my_trained_model/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary | | [`my_trained_model/vocab/strings.json`](my_trained_model/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary | | [`my_trained_model/vocab/vectors`](my_trained_model/vocab/vectors) | Local | Directory containing vector files for the vocabulary | | [`my_trained_model/vocab/vectors.cfg`](my_trained_model/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary | | [`my_trained_model/config.cfg`](my_trained_model/config.cfg) | Local | Configuration file for the trained model | | [`my_trained_model/meta.json`](my_trained_model/meta.json) | Local | JSON file containing metadata for the trained model | | [`my_trained_model/tokenizer`](my_trained_model/tokenizer) | Local | Tokenizer files for the trained model | | [`output/experiment1/model-best/textcat_multilabel/cfg`](output/experiment1/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 1 | | [`output/experiment1/model-best/textcat_multilabel/model`](output/experiment1/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 1 | | [`output/experiment1/model-best/vocab/key2row`](output/experiment1/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 1 | | [`output/experiment1/model-best/vocab/lookups.bin`](output/experiment1/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 1 | | 
[`output/experiment1/model-best/vocab/strings.json`](output/experiment1/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 1 | | [`output/experiment1/model-best/vocab/vectors`](output/experiment1/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 1 | | [`output/experiment1/model-best/vocab/vectors.cfg`](output/experiment1/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 1 | | [`output/experiment1/model-best/config.cfg`](output/experiment1/model-best/config.cfg) | Local | Configuration file for the best model in experiment 1 | | [`output/experiment1/model-best/meta.json`](output/experiment1/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 1 | | [`output/experiment1/model-best/tokenizer`](output/experiment1/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 1 | | [`output/experiment1/model-last/textcat_multilabel/cfg`](output/experiment1/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 1 | | [`output/experiment1/model-last/textcat_multilabel/model`](output/experiment1/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 1 | | [`output/experiment1/model-last/vocab/key2row`](output/experiment1/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 1 | | [`output/experiment1/model-last/vocab/lookups.bin`](output/experiment1/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 1 | | [`output/experiment1/model-last/vocab/strings.json`](output/experiment1/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 1 | | [`output/experiment1/model-last/vocab/vectors`](output/experiment1/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 1 | | [`output/experiment1/model-last/vocab/vectors.cfg`](output/experiment1/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 1 | | [`output/experiment1/model-last/config.cfg`](output/experiment1/model-last/config.cfg) | Local | Configuration file for the last model in experiment 1 | | [`output/experiment1/model-last/meta.json`](output/experiment1/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 1 | | [`output/experiment1/model-last/tokenizer`](output/experiment1/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 1 | | [`output/experiment3/model-best/textcat_multilabel/cfg`](output/experiment3/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 3 | | [`output/experiment3/model-best/textcat_multilabel/model`](output/experiment3/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 3 | | [`output/experiment3/model-best/vocab/key2row`](output/experiment3/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 3 | | 
[`output/experiment3/model-best/vocab/lookups.bin`](output/experiment3/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 3 | | [`output/experiment3/model-best/vocab/strings.json`](output/experiment3/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 3 | | [`output/experiment3/model-best/vocab/vectors`](output/experiment3/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 3 | | [`output/experiment3/model-best/vocab/vectors.cfg`](output/experiment3/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 3 | | [`output/experiment3/model-best/config.cfg`](output/experiment3/model-best/config.cfg) | Local | Configuration file for the best model in experiment 3 | | [`output/experiment3/model-best/meta.json`](output/experiment3/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 3 | | [`output/experiment3/model-best/tokenizer`](output/experiment3/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 3 | | [`output/experiment3/model-last/textcat_multilabel/cfg`](output/experiment3/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 3 | | [`output/experiment3/model-last/textcat_multilabel/model`](output/experiment3/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 3 | | [`output/experiment3/model-last/vocab/key2row`](output/experiment3/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 3 | | [`output/experiment3/model-last/vocab/lookups.bin`](output/experiment3/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 3 | | [`output/experiment3/model-last/vocab/strings.json`](output/experiment3/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 3 | | [`output/experiment3/model-last/vocab/vectors`](output/experiment3/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 3 | | [`output/experiment3/model-last/vocab/vectors.cfg`](output/experiment3/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 3 | | [`output/experiment3/model-last/config.cfg`](output/experiment3/model-last/config.cfg) | Local | Configuration file for the last model in experiment 3 | | [`output/experiment3/model-last/meta.json`](output/experiment3/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 3 | | [`output/experiment3/model-last/tokenizer`](output/experiment3/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 3 | | [`python_Code/finalStep-formatLabel.py`](python_Code/finalStep-formatLabel.py) | Local | Python script for formatting labeled data in the final step | | [`python_Code/firstStep-format.py`](python_Code/firstStep-format.py) | Local | Python script for formatting data in the first step | | [`python_Code/five_examples_annotated.ipynb`](python_Code/five_examples_annotated.ipynb) | Local | Jupyter notebook containing five annotated examples | | 
[`python_Code/secondStep-score.py`](python_Code/secondStep-score.py) | Local | Python script for scoring data in the second step | | [`python_Code/thirdStep-label.py`](python_Code/thirdStep-label.py) | Local | Python script for labeling data in the third step | | [`python_Code/train_eval_split.ipynb`](python_Code/train_eval_split.ipynb) | Local | Jupyter notebook for training and evaluation data splitting | | [`TerminalCode.txt`](TerminalCode.txt) | Local | Text file containing terminal code | | [`README.md`](README.md) | Local | Markdown file containing project documentation | | [`prodigy.json`](prodigy.json) | Local | JSON file containing Prodigy configuration | <!-- WEASEL: AUTO-GENERATED DOCS END (do not remove) -->
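## 🔍 Usage sketch A minimal sketch, with assumptions flagged inline: load the pipeline produced by `train-text-classification-model` and apply the threshold-style label acceptance described for `finalStep-formatLabel.py`. The example sentence and the 0.5 threshold are assumptions; the project's actual threshold may differ.

```python
import spacy

# Load the pipeline trained in the second step (saved to ./my_trained_model).
nlp = spacy.load("./my_trained_model")

# Hypothetical input; any ECFR-style sentence works here.
doc = nlp("The bank must comply with the reporting requirements of 12 CFR Part 21.")

# doc.cats maps each category to a score; accept labels above a threshold,
# mirroring the acceptance rule described for finalStep-formatLabel.py.
threshold = 0.5  # assumed value
accepted = [label for label, score in doc.cats.items() if score >= threshold]
print(doc.cats)
print("accepted:", accepted)
```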
{"language": "en", "tags": ["machine learning", "natural language processing", "huggingface"]}
DagimB/ecfr-textcat
null
[ "machine learning", "natural language processing", "huggingface", "en", "region:us" ]
null
2024-04-30T02:24:02+00:00
text-generation
transformers
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/6MKLoX2ruLIaREiyb6coO.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | |------------------------|-----------|-----------|-----------|-----------| | Initialize From | LLaMA-3 8B| 65K | 262K | 524k | | Sequence Length 2^N | 16 | 18 | 19 | 20 | | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B | | Batch Size | 1 | 1 | 16 | 16 | | Gradient Accumulation Steps | 32 | 16 | 1 | 1 | | Steps | 30 | 24 | 50 | 50 | | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall)| 202 | 555 | 61 | 87 | **Quants**: - [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF) - [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit) ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. 
"World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. 
### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

### Use with `llama3` Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```

For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 
100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, 
safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pre-training and fine-tuning to the deployment of systems composed of safeguards that tailor safety to the specific use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. 
The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security</span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. 
Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
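As a rough illustration of the NTK-aware RoPE theta initialization described in the Approach section near the top of this card: the sketch below follows the standard NTK-aware scaling rule from reference [1], not the exact schedule used in training (the theta values in the progressive-training table were further tuned empirically), and it assumes this repo retains the base model's `config.json`.

```python
from transformers import AutoConfig

def ntk_scaled_theta(base_theta: float, scale: float, head_dim: int) -> float:
    # Standard NTK-aware interpolation (reference [1] above): stretch the
    # rotary base so low-frequency components span the longer context.
    return base_theta * scale ** (head_dim / (head_dim - 2))

# Llama-3 8B ships with rope_theta = 500000 and head_dim = 128;
# extending 8k -> 1048k is roughly a 128x scale.
print(ntk_scaled_theta(500_000.0, 128.0, 128))  # initialization estimate only

# The released checkpoint carries its final long-context settings in its config:
config = AutoConfig.from_pretrained("blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4.4-exl2")
print(config.rope_theta, config.max_position_embeddings)
```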
{"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"}
blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4.4-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:24:05+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-dpo-full-sft-wo-healthsearch_qa This model is a fine-tuned version of [Minbyul/mistral-7b-wo-healthsearch_qa-sft](https://huggingface.co/Minbyul/mistral-7b-wo-healthsearch_qa-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.6746 - Rewards/chosen: -0.0204 - Rewards/rejected: -0.0600 - Rewards/accuracies: 0.6612 - Rewards/margins: 0.0395 - Logps/rejected: -1091.8407 - Logps/chosen: -817.4551 - Logits/rejected: -2.8353 - Logits/chosen: -2.9083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
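As an illustrative sketch of how the hyperparameters above map onto `trl`'s DPO training API: the model id and dataset come from this card, while the split name, dtype, and remaining scaffolding are assumptions, and argument names vary across `trl` versions.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Minbyul/mistral-7b-wo-healthsearch_qa-sft"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# "train_prefs" is the preference split of this dataset (assumed to be the one used).
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

# Per-device values mirror the table above; the card's totals (64 train / 32 eval)
# additionally assume 4 GPUs with gradient accumulation of 2.
args = DPOConfig(
    output_dir="mistral-7b-dpo-full-sft-wo-healthsearch_qa",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```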
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/mistral-7b-wo-healthsearch_qa-sft", "model-index": [{"name": "mistral-7b-dpo-full-sft-wo-healthsearch_qa", "results": []}]}
Minbyul/mistral-7b-dpo-full-sft-wo-healthsearch_qa
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Minbyul/mistral-7b-wo-healthsearch_qa-sft", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:26:01+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA12 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1467 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5515 | 0.09 | 10 | 0.1735 |
| 0.1665 | 0.18 | 20 | 0.1565 |
| 0.1531 | 0.27 | 30 | 0.1592 |
| 0.1558 | 0.36 | 40 | 0.1489 |
| 0.1489 | 0.45 | 50 | 0.1490 |
| 0.1518 | 0.54 | 60 | 0.1497 |
| 0.1517 | 0.63 | 70 | 0.1472 |
| 0.1485 | 0.73 | 80 | 0.1536 |
| 0.1467 | 0.82 | 90 | 0.1476 |
| 0.15 | 0.91 | 100 | 0.1674 |
| 0.1763 | 1.0 | 110 | 0.1856 |
| 1.0647 | 1.09 | 120 | 8.3962 |
| 5.0664 | 1.18 | 130 | 1.3023 |
| 1.0961 | 1.27 | 140 | 0.9335 |
| 0.6186 | 1.36 | 150 | 0.4091 |
| 0.41 | 1.45 | 160 | 0.4651 |
| 0.3489 | 1.54 | 170 | 0.2977 |
| 0.2826 | 1.63 | 180 | 0.2353 |
| 0.2238 | 1.72 | 190 | 0.2088 |
| 0.1962 | 1.81 | 200 | 0.1988 |
| 0.1893 | 1.9 | 210 | 0.1917 |
| 0.1879 | 1.99 | 220 | 0.1814 |
| 0.173 | 2.08 | 230 | 0.1894 |
| 0.1753 | 2.18 | 240 | 0.1669 |
| 0.1573 | 2.27 | 250 | 0.1580 |
| 0.1531 | 2.36 | 260 | 0.1547 |
| 0.1429 | 2.45 | 270 | 0.1496 |
| 0.1464 | 2.54 | 280 | 0.1471 |
| 0.1387 | 2.63 | 290 | 0.1482 |
| 0.1414 | 2.72 | 300 | 0.1460 |
| 0.1477 | 2.81 | 310 | 0.1461 |
| 0.1425 | 2.9 | 320 | 0.1466 |
| 0.1399 | 2.99 | 330 | 0.1467 |

### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
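As an illustrative sketch, the hyperparameters above expressed as `transformers` `TrainingArguments`; the listed values come from the card, while the output directory and the concrete `fp16` flag (for "Native AMP") are assumptions.

```python
from transformers import TrainingArguments

# Mirrors the card's settings: cosine-with-restarts schedule, 100 warmup steps,
# and 16-step gradient accumulation to reach the effective batch size of 128.
args = TrainingArguments(
    output_dir="O0428HMA12",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    seed=42,
    fp16=True,  # "mixed_precision_training: Native AMP" (assumed fp16 rather than bf16)
)
```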
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA12", "results": []}]}
Litzy619/O0428HMA12
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T02:26:20+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
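The quick-start section above is still marked [More Information Needed]. Purely as an illustrative sketch, assuming this llama text-generation checkpoint loads with the standard 🤗 auto classes (which the card does not confirm), usage might look like:

```python
# Hypothetical quick-start sketch; the card itself provides no usage code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/r9zwfd1"  # id taken from this record; everything else is assumed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```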
{"library_name": "transformers", "tags": []}
cilantro9246/r9zwfd1
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:30:06+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/20pj7c8
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:32:19+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lleticiasilvaa/1B-datasetMenor-10epochs
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:32:37+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
abc88767/model13
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:32:38+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA22 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0467 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4529 | 0.09 | 10 | 0.1637 | | 0.1607 | 0.18 | 20 | 0.1594 | | 0.1523 | 0.27 | 30 | 0.1619 | | 0.1562 | 0.36 | 40 | 0.1498 | | 0.1516 | 0.45 | 50 | 0.1536 | | 0.1533 | 0.54 | 60 | 0.1494 | | 0.1507 | 0.63 | 70 | 0.1481 | | 0.1494 | 0.73 | 80 | 0.1566 | | 0.1481 | 0.82 | 90 | 0.1476 | | 0.1486 | 0.91 | 100 | 0.1493 | | 0.1506 | 1.0 | 110 | 0.1496 | | 0.1464 | 1.09 | 120 | 0.1483 | | 0.1465 | 1.18 | 130 | 0.1523 | | 0.148 | 1.27 | 140 | 0.1493 | | 0.1512 | 1.36 | 150 | 0.1502 | | 0.147 | 1.45 | 160 | 0.1495 | | 0.1453 | 1.54 | 170 | 0.1470 | | 0.1477 | 1.63 | 180 | 0.1460 | | 0.1476 | 1.72 | 190 | 0.1500 | | 0.145 | 1.81 | 200 | 0.1482 | | 0.1483 | 1.9 | 210 | 0.1451 | | 0.139 | 1.99 | 220 | 0.1258 | | 0.0991 | 2.08 | 230 | 0.0957 | | 0.1018 | 2.18 | 240 | 0.0760 | | 0.0642 | 2.27 | 250 | 0.0672 | | 0.0644 | 2.36 | 260 | 0.0607 | | 0.0533 | 2.45 | 270 | 0.0558 | | 0.0475 | 2.54 | 280 | 0.0542 | | 0.0509 | 2.63 | 290 | 0.0499 | | 0.0512 | 2.72 | 300 | 0.0486 | | 0.0478 | 2.81 | 310 | 0.0488 | | 0.0466 | 2.9 | 320 | 0.0471 | | 0.0504 | 2.99 | 330 | 0.0467 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
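For reference, the hyperparameters listed above correspond roughly to the following Hugging Face `TrainingArguments` (an illustrative sketch only; the actual training script, dataset, and model wrapper are not published):

```python
# Illustrative reconstruction of the listed hyperparameters; not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="O0428HMA22",            # assumed output path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,     # 8 * 16 = total train batch size of 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,                          # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=10,                      # the results table evaluates every 10 steps
)
```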
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA22", "results": []}]}
Litzy619/O0428HMA22
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T02:33:33+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA21 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0514 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4312 | 0.09 | 10 | 0.1993 | | 0.1645 | 0.18 | 20 | 0.1553 | | 0.1493 | 0.27 | 30 | 0.1641 | | 0.1576 | 0.36 | 40 | 0.1525 | | 0.1525 | 0.45 | 50 | 0.1490 | | 0.1538 | 0.54 | 60 | 0.1493 | | 0.1506 | 0.63 | 70 | 0.1472 | | 0.1497 | 0.73 | 80 | 0.1536 | | 0.1472 | 0.82 | 90 | 0.1494 | | 0.1484 | 0.91 | 100 | 0.1478 | | 0.1422 | 1.0 | 110 | 0.1043 | | 0.6143 | 1.09 | 120 | 0.1460 | | 0.1612 | 1.18 | 130 | 0.1327 | | 0.1067 | 1.27 | 140 | 0.0796 | | 0.3298 | 1.36 | 150 | 0.0890 | | 0.0715 | 1.45 | 160 | 0.0631 | | 0.0578 | 1.54 | 170 | 0.0577 | | 0.0614 | 1.63 | 180 | 0.0570 | | 0.063 | 1.72 | 190 | 0.0554 | | 0.0561 | 1.81 | 200 | 0.0554 | | 0.0561 | 1.9 | 210 | 0.0580 | | 0.0568 | 1.99 | 220 | 0.0554 | | 0.0559 | 2.08 | 230 | 0.0528 | | 0.0546 | 2.18 | 240 | 0.0597 | | 0.0577 | 2.27 | 250 | 0.0600 | | 0.0592 | 2.36 | 260 | 0.0560 | | 0.0547 | 2.45 | 270 | 0.0537 | | 0.0517 | 2.54 | 280 | 0.0530 | | 0.0524 | 2.63 | 290 | 0.0541 | | 0.0532 | 2.72 | 300 | 0.0514 | | 0.0531 | 2.81 | 310 | 0.0512 | | 0.0546 | 2.9 | 320 | 0.0514 | | 0.0547 | 2.99 | 330 | 0.0514 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
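In the `transformers` Trainer, the `cosine_with_restarts` schedule with 80 warmup steps listed above maps to the helper sketched below (placeholder parameters; 330 steps matches the end of the results table):

```python
# Sketch of the named LR schedule; the parameter list is a stand-in for a real model.
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder for model.parameters()
optimizer = torch.optim.AdamW(params, lr=3e-4, betas=(0.9, 0.999), eps=1e-8)

scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer, num_warmup_steps=80, num_training_steps=330
)

for _ in range(330):   # linear warmup to step 80, then cosine decay with restarts
    optimizer.step()
    scheduler.step()
```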
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA21", "results": []}]}
Litzy619/O0428HMA21
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T02:33:35+00:00
null
null
{}
Litzy619/O0428HMA23
null
[ "region:us" ]
null
2024-04-30T02:34:14+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA24 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3605 | 0.09 | 10 | 0.1809 | | 0.1688 | 0.18 | 20 | 0.1604 | | 0.1494 | 0.27 | 30 | 0.1601 | | 0.1569 | 0.36 | 40 | 0.1538 | | 0.1533 | 0.45 | 50 | 0.1535 | | 0.1529 | 0.54 | 60 | 0.1502 | | 0.1499 | 0.63 | 70 | 0.1480 | | 0.15 | 0.73 | 80 | 0.1548 | | 0.1475 | 0.82 | 90 | 0.1495 | | 0.1479 | 0.91 | 100 | 0.1459 | | 0.1355 | 1.0 | 110 | 0.1022 | | 0.2371 | 1.09 | 120 | 0.1226 | | 0.1134 | 1.18 | 130 | 0.0893 | | 0.0964 | 1.27 | 140 | 0.0853 | | 0.0865 | 1.36 | 150 | 0.0728 | | 0.0896 | 1.45 | 160 | 0.0597 | | 0.0643 | 1.54 | 170 | 0.0606 | | 0.0606 | 1.63 | 180 | 0.0574 | | 0.0631 | 1.72 | 190 | 0.0569 | | 0.0577 | 1.81 | 200 | 0.0625 | | 0.0584 | 1.9 | 210 | 0.0613 | | 0.0601 | 1.99 | 220 | 0.0564 | | 0.0582 | 2.08 | 230 | 0.0578 | | 0.0548 | 2.18 | 240 | 0.0587 | | 0.0561 | 2.27 | 250 | 0.0592 | | 0.061 | 2.36 | 260 | 0.0571 | | 0.0534 | 2.45 | 270 | 0.0559 | | 0.052 | 2.54 | 280 | 0.0556 | | 0.0549 | 2.63 | 290 | 0.0571 | | 0.0568 | 2.72 | 300 | 0.0551 | | 0.0567 | 2.81 | 310 | 0.0549 | | 0.0577 | 2.9 | 320 | 0.0551 | | 0.0607 | 2.99 | 330 | 0.0551 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
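"Native AMP" in the hyperparameters above refers to PyTorch's built-in automatic mixed precision. Outside the Trainer, the same setting follows the generic autocast/GradScaler pattern below (a toy sketch requiring a CUDA device, not the original training loop):

```python
# Generic PyTorch AMP pattern corresponding to "Native AMP"; toy model and data.
import torch

model = torch.nn.Linear(16, 2).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scaler = torch.cuda.amp.GradScaler()
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(8, 16, device="cuda")
y = torch.randint(0, 2, (8,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():    # forward pass runs in mixed precision
    loss = loss_fn(model(x), y)
scaler.scale(loss).backward()      # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)
scaler.update()
```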
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA24", "results": []}]}
Litzy619/O0428HMA24
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T02:34:21+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft-facebook-bart-large-xsum-on-samsum This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5249 - Rouge1: 50.3616 - Rouge2: 25.1246 - Rougel: 41.214 - Rougelsum: 46.1946 - Gen Len: 26.423 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 0.11 | 100 | 1.5514 | 49.1738 | 23.682 | 40.0793 | 44.8382 | 26.0818 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
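The card provides no usage code; as a hedged inference sketch (standard `transformers` summarization usage, not taken from the card):

```python
# Illustrative inference sketch for this summarization checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="mrami010/ft-facebook-bart-large-xsum-on-samsum",  # id from this record
)

dialogue = (
    "Amanda: I baked cookies. Do you want some? "
    "Jerry: Sure! Amanda: I'll bring you some tomorrow."
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```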
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "facebook/bart-large-xsum", "model-index": [{"name": "ft-facebook-bart-large-xsum-on-samsum", "results": []}]}
mrami010/ft-facebook-bart-large-xsum-on-samsum
null
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-xsum", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:35:08+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5103 - F1 Score: 0.7719 - Accuracy: 0.7727 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5815 | 1.01 | 200 | 0.5460 | 0.7425 | 0.7437 | | 0.5388 | 2.02 | 400 | 0.5339 | 0.7539 | 0.7557 | | 0.5267 | 3.03 | 600 | 0.5203 | 0.7634 | 0.7642 | | 0.5157 | 4.04 | 800 | 0.5263 | 0.7570 | 0.7592 | | 0.5107 | 5.05 | 1000 | 0.5209 | 0.7684 | 0.7689 | | 0.503 | 6.06 | 1200 | 0.5166 | 0.7622 | 0.7636 | | 0.4945 | 7.07 | 1400 | 0.5246 | 0.7603 | 0.7623 | | 0.491 | 8.08 | 1600 | 0.5230 | 0.7634 | 0.7645 | | 0.4814 | 9.09 | 1800 | 0.5138 | 0.7632 | 0.7648 | | 0.4748 | 10.1 | 2000 | 0.5255 | 0.7538 | 0.7563 | | 0.4648 | 11.11 | 2200 | 0.5249 | 0.7613 | 0.7629 | | 0.4588 | 12.12 | 2400 | 0.5281 | 0.7497 | 0.7509 | | 0.4516 | 13.13 | 2600 | 0.5384 | 0.7542 | 0.7573 | | 0.447 | 14.14 | 2800 | 0.5295 | 0.7590 | 0.7598 | | 0.4346 | 15.15 | 3000 | 0.5380 | 0.7577 | 0.7579 | | 0.4293 | 16.16 | 3200 | 0.5431 | 0.7446 | 0.7456 | | 0.422 | 17.17 | 3400 | 0.5519 | 0.7602 | 0.7610 | | 0.4181 | 18.18 | 3600 | 0.5535 | 0.7426 | 0.7456 | | 0.4024 | 19.19 | 3800 | 0.5521 | 0.7456 | 0.7472 | | 0.3964 | 20.2 | 4000 | 0.5623 | 0.7467 | 0.7481 | | 0.3941 | 21.21 | 4200 | 0.5572 | 0.7504 | 0.7519 | | 0.3824 | 22.22 | 4400 | 0.5833 | 0.7475 | 0.7478 | | 0.3755 | 23.23 | 4600 | 0.5835 | 0.7469 | 0.7472 | | 0.3746 | 24.24 | 4800 | 0.5921 | 0.7447 | 0.7472 | | 0.3647 | 25.25 | 5000 | 0.5953 | 0.7334 | 0.7333 | | 0.3623 | 26.26 | 5200 | 0.5986 | 0.7351 | 0.7355 | | 0.3515 | 27.27 | 5400 | 0.6126 | 0.7301 | 0.7323 | | 0.3485 | 28.28 | 5600 | 0.6078 | 0.7370 | 0.7380 | | 0.3441 | 29.29 | 5800 | 0.6272 | 0.7363 | 0.7371 | | 0.3326 | 30.3 | 6000 | 0.6436 | 0.7388 | 0.7386 | | 0.3347 | 31.31 | 6200 | 0.6255 | 0.7368 | 0.7377 | | 0.3316 | 32.32 | 6400 | 0.6361 | 0.7294 | 0.7311 | | 0.3216 | 33.33 | 6600 | 0.6443 | 0.7279 | 0.7301 | | 0.3179 | 34.34 | 6800 | 0.6395 | 0.7278 | 0.7282 | | 0.3067 | 35.35 | 7000 | 0.6541 | 0.7329 | 0.7333 | | 0.3097 | 36.36 | 7200 | 0.6668 | 0.7239 | 0.7251 | | 0.3056 | 37.37 | 7400 | 0.6633 | 0.7266 | 0.7282 | | 0.3005 | 38.38 | 7600 | 0.6693 | 0.7229 | 0.7232 | | 0.2895 | 39.39 | 7800 | 0.6951 | 0.7264 | 0.7266 | | 0.2925 | 40.4 | 8000 | 0.6964 | 0.7239 | 0.7244 | | 0.2902 | 41.41 | 8200 | 0.6895 | 0.7276 | 0.7295 | | 0.2883 | 42.42 | 8400 | 0.7034 | 0.7224 | 0.7244 | | 0.2851 | 43.43 | 8600 | 0.7049 | 0.7226 | 0.7235 | | 0.2807 | 44.44 | 8800 | 0.7085 | 0.7212 | 0.7219 | | 0.2805 
| 45.45 | 9000 | 0.7033 | 0.7229 | 0.7241 | | 0.2813 | 46.46 | 9200 | 0.7042 | 0.7242 | 0.7247 | | 0.2779 | 47.47 | 9400 | 0.7097 | 0.7203 | 0.7213 | | 0.2705 | 48.48 | 9600 | 0.7155 | 0.7222 | 0.7229 | | 0.2768 | 49.49 | 9800 | 0.7125 | 0.7171 | 0.7181 | | 0.2705 | 50.51 | 10000 | 0.7124 | 0.7231 | 0.7238 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
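Loading a PEFT checkpoint like this one generally follows the base-model-plus-adapter pattern sketched below. Both the use of `AutoModelForSequenceClassification` and the absence of `trust_remote_code` are unverified assumptions about the `seqsight` base model:

```python
# Hedged sketch of the usual PEFT loading pattern; untested for this checkpoint.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"  # base model named in the card
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id)  # may need trust_remote_code=True
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the trained adapter

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA-like input
logits = model(**inputs).logits
```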
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:35:24+00:00
null
null
{}
RichardHu0307/sadsad
null
[ "region:us" ]
null
2024-04-30T02:35:25+00:00
sentence-similarity
peft
# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders > LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance. - **Repository:** https://github.com/McGill-NLP/llm2vec - **Paper:** https://arxiv.org/abs/2404.05961 ## Installation ```bash pip install llm2vec ``` ## Usage ```python from llm2vec import LLM2Vec import torch from transformers import AutoTokenizer, AutoModel, AutoConfig from peft import PeftModel # Loading the base Meta-Llama-3 model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model. tokenizer = AutoTokenizer.from_pretrained( "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp" ) config = AutoConfig.from_pretrained( "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True ) model = AutoModel.from_pretrained( "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True, config=config, torch_dtype=torch.bfloat16, device_map="cuda" if torch.cuda.is_available() else "cpu", ) model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", ) model = model.merge_and_unload() # This can take several minutes on CPU # Loading the supervised model. This loads the trained LoRA weights on top of the MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + supervised (LoRA). model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised" ) # Wrapper for encoding and pooling operations l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512) # Encoding queries using instructions instruction = ( "Given a web search query, retrieve relevant passages that answer the query:" ) queries = [ [instruction, "how much protein should a female eat"], [instruction, "summit define"], ] q_reps = l2v.encode(queries) # Encoding documents. Instructions are not required for documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.", ] d_reps = l2v.encode(documents) # Compute cosine similarity q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1) d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1) cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1)) print(cos_sim) """ tensor([[0.6470, 0.1619], [0.0786, 0.5844]]) """ ``` ## Questions If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`).
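Continuing the usage snippet above (reusing `cos_sim` and `queries` from it), the similarity matrix can be turned into a per-query document ranking:

```python
# Follow-on sketch: rank documents for each query by cosine similarity.
ranking = cos_sim.argsort(dim=1, descending=True)
for qi, (_, query_text) in enumerate(queries):
    best = ranking[qi, 0].item()
    print(f"query {qi} ({query_text!r}): best document = {best}, "
          f"score = {cos_sim[qi, best]:.4f}")
```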
{"language": ["en"], "license": "mit", "library_name": "peft", "tags": ["text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "LLM2Vec-Meta-Llama-3-supervised", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 79.94029850746269}, {"type": "ap", "value": 44.93223506764482}, {"type": "f1", "value": 74.30328994013465}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 86.06680000000001}, {"type": "ap", "value": 81.97124658709345}, {"type": "f1", "value": 86.00558036874241}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 46.836}, {"type": "f1", "value": 46.05094679201488}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 37.980000000000004}, {"type": "map_at_10", "value": 54.167}, {"type": "map_at_100", "value": 54.735}, {"type": "map_at_1000", "value": 54.738}, {"type": "map_at_3", "value": 49.384}, {"type": "map_at_5", "value": 52.285000000000004}, {"type": "mrr_at_1", "value": 38.549}, {"type": "mrr_at_10", "value": 54.351000000000006}, {"type": "mrr_at_100", "value": 54.932}, {"type": "mrr_at_1000", "value": 54.935}, {"type": "mrr_at_3", "value": 49.585}, {"type": "mrr_at_5", "value": 52.469}, {"type": "ndcg_at_1", "value": 37.980000000000004}, {"type": "ndcg_at_10", "value": 62.778999999999996}, {"type": "ndcg_at_100", "value": 64.986}, {"type": "ndcg_at_1000", "value": 65.036}, {"type": "ndcg_at_3", "value": 53.086999999999996}, {"type": "ndcg_at_5", "value": 58.263}, {"type": "precision_at_1", "value": 37.980000000000004}, {"type": "precision_at_10", "value": 9.011}, {"type": "precision_at_100", "value": 0.993}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 21.266}, {"type": "precision_at_5", "value": 15.248999999999999}, {"type": "recall_at_1", "value": 37.980000000000004}, {"type": "recall_at_10", "value": 90.114}, {"type": "recall_at_100", "value": 99.289}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 63.798}, {"type": "recall_at_5", "value": 76.24499999999999}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 44.27081216556421}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", 
"revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 46.8490872532913}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 65.18525400430678}, {"type": "mrr", "value": 78.80149936244119}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_spearman", "value": 84.92301936595548}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 88.0487012987013}, {"type": "f1", "value": 88.00953788281542}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 32.34687321141145}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 36.69881680534123}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "cqadupstack/android", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 37.742}, {"type": "map_at_10", "value": 51.803}, {"type": "map_at_100", "value": 53.556000000000004}, {"type": "map_at_1000", "value": 53.652}, {"type": "map_at_3", "value": 47.286}, {"type": "map_at_5", "value": 50.126000000000005}, {"type": "mrr_at_1", "value": 46.924}, {"type": "mrr_at_10", "value": 57.857}, {"type": "mrr_at_100", "value": 58.592}, {"type": "mrr_at_1000", "value": 58.619}, {"type": "mrr_at_3", "value": 55.340999999999994}, {"type": "mrr_at_5", "value": 57.150999999999996}, {"type": "ndcg_at_1", "value": 46.924}, {"type": "ndcg_at_10", "value": 58.733999999999995}, {"type": "ndcg_at_100", "value": 63.771}, {"type": "ndcg_at_1000", "value": 64.934}, {"type": "ndcg_at_3", "value": 53.189}, {"type": "ndcg_at_5", "value": 56.381}, {"type": "precision_at_1", "value": 46.924}, {"type": "precision_at_10", "value": 11.431}, {"type": "precision_at_100", "value": 1.73}, {"type": "precision_at_1000", "value": 0.213}, {"type": "precision_at_3", "value": 25.942}, {"type": "precision_at_5", "value": 19.113}, {"type": "recall_at_1", "value": 37.742}, {"type": "recall_at_10", "value": 71.34}, {"type": "recall_at_100", "value": 91.523}, {"type": "recall_at_1000", "value": 98.494}, {"type": "recall_at_3", "value": 55.443}, {"type": "recall_at_5", "value": 64.122}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackEnglishRetrieval", "type": "cqadupstack/english", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 34.183}, {"type": "map_at_10", "value": 46.837}, {"type": "map_at_100", "value": 48.126000000000005}, {"type": "map_at_1000", "value": 48.25}, {"type": "map_at_3", "value": 43.171}, {"type": "map_at_5", "value": 45.318999999999996}, 
{"type": "mrr_at_1", "value": 43.376}, {"type": "mrr_at_10", "value": 52.859}, {"type": "mrr_at_100", "value": 53.422000000000004}, {"type": "mrr_at_1000", "value": 53.456}, {"type": "mrr_at_3", "value": 50.434999999999995}, {"type": "mrr_at_5", "value": 51.861999999999995}, {"type": "ndcg_at_1", "value": 43.376}, {"type": "ndcg_at_10", "value": 53.223}, {"type": "ndcg_at_100", "value": 57.175}, {"type": "ndcg_at_1000", "value": 58.86900000000001}, {"type": "ndcg_at_3", "value": 48.417}, {"type": "ndcg_at_5", "value": 50.77}, {"type": "precision_at_1", "value": 43.376}, {"type": "precision_at_10", "value": 10.236}, {"type": "precision_at_100", "value": 1.5730000000000002}, {"type": "precision_at_1000", "value": 0.203}, {"type": "precision_at_3", "value": 23.97}, {"type": "precision_at_5", "value": 17.134}, {"type": "recall_at_1", "value": 34.183}, {"type": "recall_at_10", "value": 64.866}, {"type": "recall_at_100", "value": 81.26100000000001}, {"type": "recall_at_1000", "value": 91.412}, {"type": "recall_at_3", "value": 50.080000000000005}, {"type": "recall_at_5", "value": 56.871}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGamingRetrieval", "type": "cqadupstack/gaming", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 44.878}, {"type": "map_at_10", "value": 58.656}, {"type": "map_at_100", "value": 59.668}, {"type": "map_at_1000", "value": 59.704}, {"type": "map_at_3", "value": 54.891}, {"type": "map_at_5", "value": 57.050999999999995}, {"type": "mrr_at_1", "value": 51.975}, {"type": "mrr_at_10", "value": 62.357}, {"type": "mrr_at_100", "value": 62.907999999999994}, {"type": "mrr_at_1000", "value": 62.925}, {"type": "mrr_at_3", "value": 59.801}, {"type": "mrr_at_5", "value": 61.278}, {"type": "ndcg_at_1", "value": 51.975}, {"type": "ndcg_at_10", "value": 64.95100000000001}, {"type": "ndcg_at_100", "value": 68.414}, {"type": "ndcg_at_1000", "value": 69.077}, {"type": "ndcg_at_3", "value": 58.897999999999996}, {"type": "ndcg_at_5", "value": 61.866}, {"type": "precision_at_1", "value": 51.975}, {"type": "precision_at_10", "value": 10.502}, {"type": "precision_at_100", "value": 1.31}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_3", "value": 26.290000000000003}, {"type": "precision_at_5", "value": 18.093999999999998}, {"type": "recall_at_1", "value": 44.878}, {"type": "recall_at_10", "value": 79.746}, {"type": "recall_at_100", "value": 94.17}, {"type": "recall_at_1000", "value": 98.80499999999999}, {"type": "recall_at_3", "value": 63.70099999999999}, {"type": "recall_at_5", "value": 70.878}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGisRetrieval", "type": "cqadupstack/gis", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 28.807}, {"type": "map_at_10", "value": 39.431}, {"type": "map_at_100", "value": 40.56}, {"type": "map_at_1000", "value": 40.617999999999995}, {"type": "map_at_3", "value": 36.436}, {"type": "map_at_5", "value": 37.955}, {"type": "mrr_at_1", "value": 31.186000000000003}, {"type": "mrr_at_10", "value": 41.654}, {"type": "mrr_at_100", "value": 42.58}, {"type": "mrr_at_1000", "value": 42.623}, {"type": "mrr_at_3", "value": 38.983000000000004}, {"type": "mrr_at_5", "value": 40.35}, {"type": "ndcg_at_1", "value": 31.186000000000003}, {"type": "ndcg_at_10", "value": 45.297}, {"type": "ndcg_at_100", "value": 50.515}, {"type": "ndcg_at_1000", "value": 52.005}, {"type": "ndcg_at_3", 
"value": 39.602}, {"type": "ndcg_at_5", "value": 42.027}, {"type": "precision_at_1", "value": 31.186000000000003}, {"type": "precision_at_10", "value": 7.073}, {"type": "precision_at_100", "value": 1.0210000000000001}, {"type": "precision_at_1000", "value": 0.11900000000000001}, {"type": "precision_at_3", "value": 17.1}, {"type": "precision_at_5", "value": 11.729000000000001}, {"type": "recall_at_1", "value": 28.807}, {"type": "recall_at_10", "value": 61.138999999999996}, {"type": "recall_at_100", "value": 84.491}, {"type": "recall_at_1000", "value": 95.651}, {"type": "recall_at_3", "value": 45.652}, {"type": "recall_at_5", "value": 51.522}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackMathematicaRetrieval", "type": "cqadupstack/mathematica", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 20.607}, {"type": "map_at_10", "value": 31.944}, {"type": "map_at_100", "value": 33.317}, {"type": "map_at_1000", "value": 33.428000000000004}, {"type": "map_at_3", "value": 28.508}, {"type": "map_at_5", "value": 30.348999999999997}, {"type": "mrr_at_1", "value": 25.622}, {"type": "mrr_at_10", "value": 36.726}, {"type": "mrr_at_100", "value": 37.707}, {"type": "mrr_at_1000", "value": 37.761}, {"type": "mrr_at_3", "value": 33.934}, {"type": "mrr_at_5", "value": 35.452}, {"type": "ndcg_at_1", "value": 25.622}, {"type": "ndcg_at_10", "value": 38.462}, {"type": "ndcg_at_100", "value": 44.327}, {"type": "ndcg_at_1000", "value": 46.623}, {"type": "ndcg_at_3", "value": 32.583}, {"type": "ndcg_at_5", "value": 35.175}, {"type": "precision_at_1", "value": 25.622}, {"type": "precision_at_10", "value": 7.425}, {"type": "precision_at_100", "value": 1.173}, {"type": "precision_at_1000", "value": 0.149}, {"type": "precision_at_3", "value": 16.418}, {"type": "precision_at_5", "value": 11.866}, {"type": "recall_at_1", "value": 20.607}, {"type": "recall_at_10", "value": 53.337}, {"type": "recall_at_100", "value": 78.133}, {"type": "recall_at_1000", "value": 94.151}, {"type": "recall_at_3", "value": 37.088}, {"type": "recall_at_5", "value": 43.627}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackPhysicsRetrieval", "type": "cqadupstack/physics", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 33.814}, {"type": "map_at_10", "value": 47.609}, {"type": "map_at_100", "value": 48.972}, {"type": "map_at_1000", "value": 49.061}, {"type": "map_at_3", "value": 43.397999999999996}, {"type": "map_at_5", "value": 45.839}, {"type": "mrr_at_1", "value": 42.059999999999995}, {"type": "mrr_at_10", "value": 53.074}, {"type": "mrr_at_100", "value": 53.76800000000001}, {"type": "mrr_at_1000", "value": 53.794}, {"type": "mrr_at_3", "value": 50.241}, {"type": "mrr_at_5", "value": 51.805}, {"type": "ndcg_at_1", "value": 42.059999999999995}, {"type": "ndcg_at_10", "value": 54.419}, {"type": "ndcg_at_100", "value": 59.508}, {"type": "ndcg_at_1000", "value": 60.858000000000004}, {"type": "ndcg_at_3", "value": 48.296}, {"type": "ndcg_at_5", "value": 51.28}, {"type": "precision_at_1", "value": 42.059999999999995}, {"type": "precision_at_10", "value": 10.231}, {"type": "precision_at_100", "value": 1.4789999999999999}, {"type": "precision_at_1000", "value": 0.17700000000000002}, {"type": "precision_at_3", "value": 23.419999999999998}, {"type": "precision_at_5", "value": 16.843}, {"type": "recall_at_1", "value": 33.814}, {"type": "recall_at_10", "value": 68.88}, {"type": "recall_at_100", "value": 
89.794}, {"type": "recall_at_1000", "value": 98.058}, {"type": "recall_at_3", "value": 51.915}, {"type": "recall_at_5", "value": 59.704}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackProgrammersRetrieval", "type": "cqadupstack/programmers", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 29.668}, {"type": "map_at_10", "value": 43.032}, {"type": "map_at_100", "value": 44.48}, {"type": "map_at_1000", "value": 44.574000000000005}, {"type": "map_at_3", "value": 38.609}, {"type": "map_at_5", "value": 41.164}, {"type": "mrr_at_1", "value": 37.785000000000004}, {"type": "mrr_at_10", "value": 48.898}, {"type": "mrr_at_100", "value": 49.728}, {"type": "mrr_at_1000", "value": 49.769000000000005}, {"type": "mrr_at_3", "value": 45.909}, {"type": "mrr_at_5", "value": 47.61}, {"type": "ndcg_at_1", "value": 37.785000000000004}, {"type": "ndcg_at_10", "value": 50.21099999999999}, {"type": "ndcg_at_100", "value": 55.657999999999994}, {"type": "ndcg_at_1000", "value": 57.172}, {"type": "ndcg_at_3", "value": 43.726}, {"type": "ndcg_at_5", "value": 46.758}, {"type": "precision_at_1", "value": 37.785000000000004}, {"type": "precision_at_10", "value": 9.669}, {"type": "precision_at_100", "value": 1.4409999999999998}, {"type": "precision_at_1000", "value": 0.174}, {"type": "precision_at_3", "value": 21.651}, {"type": "precision_at_5", "value": 15.822}, {"type": "recall_at_1", "value": 29.668}, {"type": "recall_at_10", "value": 65.575}, {"type": "recall_at_100", "value": 87.977}, {"type": "recall_at_1000", "value": 97.615}, {"type": "recall_at_3", "value": 47.251}, {"type": "recall_at_5", "value": 55.359}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval", "type": "mteb/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 30.29925}, {"type": "map_at_10", "value": 41.98708333333333}, {"type": "map_at_100", "value": 43.306916666666666}, {"type": "map_at_1000", "value": 43.40716666666667}, {"type": "map_at_3", "value": 38.431666666666665}, {"type": "map_at_5", "value": 40.4195}, {"type": "mrr_at_1", "value": 36.24483333333334}, {"type": "mrr_at_10", "value": 46.32666666666667}, {"type": "mrr_at_100", "value": 47.13983333333333}, {"type": "mrr_at_1000", "value": 47.18058333333334}, {"type": "mrr_at_3", "value": 43.66799999999999}, {"type": "mrr_at_5", "value": 45.163666666666664}, {"type": "ndcg_at_1", "value": 36.24483333333334}, {"type": "ndcg_at_10", "value": 48.251916666666666}, {"type": "ndcg_at_100", "value": 53.3555}, {"type": "ndcg_at_1000", "value": 55.024249999999995}, {"type": "ndcg_at_3", "value": 42.599583333333335}, {"type": "ndcg_at_5", "value": 45.24166666666666}, {"type": "precision_at_1", "value": 36.24483333333334}, {"type": "precision_at_10", "value": 8.666833333333333}, {"type": "precision_at_100", "value": 1.3214166666666665}, {"type": "precision_at_1000", "value": 0.16475}, {"type": "precision_at_3", "value": 19.9955}, {"type": "precision_at_5", "value": 14.271999999999998}, {"type": "recall_at_1", "value": 30.29925}, {"type": "recall_at_10", "value": 62.232333333333344}, {"type": "recall_at_100", "value": 84.151}, {"type": "recall_at_1000", "value": 95.37333333333333}, {"type": "recall_at_3", "value": 46.45541666666667}, {"type": "recall_at_5", "value": 53.264}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackStatsRetrieval", "type": "cqadupstack/stats", "config": "default", "split": "test", 
"revision": "None"}, "metrics": [{"type": "map_at_1", "value": 28.996}, {"type": "map_at_10", "value": 38.047}, {"type": "map_at_100", "value": 39.121}, {"type": "map_at_1000", "value": 39.202999999999996}, {"type": "map_at_3", "value": 35.376000000000005}, {"type": "map_at_5", "value": 36.763}, {"type": "mrr_at_1", "value": 32.362}, {"type": "mrr_at_10", "value": 40.717999999999996}, {"type": "mrr_at_100", "value": 41.586}, {"type": "mrr_at_1000", "value": 41.641}, {"type": "mrr_at_3", "value": 38.292}, {"type": "mrr_at_5", "value": 39.657}, {"type": "ndcg_at_1", "value": 32.362}, {"type": "ndcg_at_10", "value": 43.105}, {"type": "ndcg_at_100", "value": 48.026}, {"type": "ndcg_at_1000", "value": 49.998}, {"type": "ndcg_at_3", "value": 38.147999999999996}, {"type": "ndcg_at_5", "value": 40.385}, {"type": "precision_at_1", "value": 32.362}, {"type": "precision_at_10", "value": 6.7940000000000005}, {"type": "precision_at_100", "value": 1.0170000000000001}, {"type": "precision_at_1000", "value": 0.125}, {"type": "precision_at_3", "value": 16.411}, {"type": "precision_at_5", "value": 11.35}, {"type": "recall_at_1", "value": 28.996}, {"type": "recall_at_10", "value": 55.955}, {"type": "recall_at_100", "value": 77.744}, {"type": "recall_at_1000", "value": 92.196}, {"type": "recall_at_3", "value": 42.254999999999995}, {"type": "recall_at_5", "value": 47.776}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackTexRetrieval", "type": "cqadupstack/tex", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 20.029}, {"type": "map_at_10", "value": 29.188}, {"type": "map_at_100", "value": 30.484}, {"type": "map_at_1000", "value": 30.608}, {"type": "map_at_3", "value": 26.195}, {"type": "map_at_5", "value": 27.866999999999997}, {"type": "mrr_at_1", "value": 24.57}, {"type": "mrr_at_10", "value": 33.461}, {"type": "mrr_at_100", "value": 34.398}, {"type": "mrr_at_1000", "value": 34.464}, {"type": "mrr_at_3", "value": 30.856}, {"type": "mrr_at_5", "value": 32.322}, {"type": "ndcg_at_1", "value": 24.57}, {"type": "ndcg_at_10", "value": 34.846}, {"type": "ndcg_at_100", "value": 40.544000000000004}, {"type": "ndcg_at_1000", "value": 43.019}, {"type": "ndcg_at_3", "value": 29.683999999999997}, {"type": "ndcg_at_5", "value": 32.11}, {"type": "precision_at_1", "value": 24.57}, {"type": "precision_at_10", "value": 6.535}, {"type": "precision_at_100", "value": 1.11}, {"type": "precision_at_1000", "value": 0.149}, {"type": "precision_at_3", "value": 14.338000000000001}, {"type": "precision_at_5", "value": 10.496}, {"type": "recall_at_1", "value": 20.029}, {"type": "recall_at_10", "value": 47.509}, {"type": "recall_at_100", "value": 72.61999999999999}, {"type": "recall_at_1000", "value": 89.778}, {"type": "recall_at_3", "value": 33.031}, {"type": "recall_at_5", "value": 39.306000000000004}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackUnixRetrieval", "type": "cqadupstack/unix", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.753999999999998}, {"type": "map_at_10", "value": 43.814}, {"type": "map_at_100", "value": 45.072}, {"type": "map_at_1000", "value": 45.155}, {"type": "map_at_3", "value": 40.316}, {"type": "map_at_5", "value": 42.15}, {"type": "mrr_at_1", "value": 38.06}, {"type": "mrr_at_10", "value": 48.311}, {"type": "mrr_at_100", "value": 49.145}, {"type": "mrr_at_1000", "value": 49.181000000000004}, {"type": "mrr_at_3", "value": 45.678000000000004}, {"type": 
"mrr_at_5", "value": 47.072}, {"type": "ndcg_at_1", "value": 38.06}, {"type": "ndcg_at_10", "value": 50.083}, {"type": "ndcg_at_100", "value": 55.342}, {"type": "ndcg_at_1000", "value": 56.87}, {"type": "ndcg_at_3", "value": 44.513999999999996}, {"type": "ndcg_at_5", "value": 46.886}, {"type": "precision_at_1", "value": 38.06}, {"type": "precision_at_10", "value": 8.638}, {"type": "precision_at_100", "value": 1.253}, {"type": "precision_at_1000", "value": 0.149}, {"type": "precision_at_3", "value": 20.709}, {"type": "precision_at_5", "value": 14.44}, {"type": "recall_at_1", "value": 31.753999999999998}, {"type": "recall_at_10", "value": 64.473}, {"type": "recall_at_100", "value": 86.832}, {"type": "recall_at_1000", "value": 96.706}, {"type": "recall_at_3", "value": 48.937000000000005}, {"type": "recall_at_5", "value": 55.214}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWebmastersRetrieval", "type": "cqadupstack/webmasters", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 28.815}, {"type": "map_at_10", "value": 40.595}, {"type": "map_at_100", "value": 42.337}, {"type": "map_at_1000", "value": 42.559000000000005}, {"type": "map_at_3", "value": 37.120999999999995}, {"type": "map_at_5", "value": 38.912}, {"type": "mrr_at_1", "value": 34.585}, {"type": "mrr_at_10", "value": 45.068000000000005}, {"type": "mrr_at_100", "value": 45.93}, {"type": "mrr_at_1000", "value": 45.974}, {"type": "mrr_at_3", "value": 42.26}, {"type": "mrr_at_5", "value": 43.742}, {"type": "ndcg_at_1", "value": 34.585}, {"type": "ndcg_at_10", "value": 47.519}, {"type": "ndcg_at_100", "value": 53.102000000000004}, {"type": "ndcg_at_1000", "value": 54.949999999999996}, {"type": "ndcg_at_3", "value": 41.719}, {"type": "ndcg_at_5", "value": 44.17}, {"type": "precision_at_1", "value": 34.585}, {"type": "precision_at_10", "value": 9.368}, {"type": "precision_at_100", "value": 1.7870000000000001}, {"type": "precision_at_1000", "value": 0.254}, {"type": "precision_at_3", "value": 19.895}, {"type": "precision_at_5", "value": 14.506}, {"type": "recall_at_1", "value": 28.815}, {"type": "recall_at_10", "value": 61.414}, {"type": "recall_at_100", "value": 85.922}, {"type": "recall_at_1000", "value": 97.15}, {"type": "recall_at_3", "value": 45.076}, {"type": "recall_at_5", "value": 51.271}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWordpressRetrieval", "type": "cqadupstack/wordpress", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 24.298000000000002}, {"type": "map_at_10", "value": 32.889}, {"type": "map_at_100", "value": 33.989999999999995}, {"type": "map_at_1000", "value": 34.074}, {"type": "map_at_3", "value": 29.873}, {"type": "map_at_5", "value": 31.539}, {"type": "mrr_at_1", "value": 26.433}, {"type": "mrr_at_10", "value": 34.937000000000005}, {"type": "mrr_at_100", "value": 35.914}, {"type": "mrr_at_1000", "value": 35.96}, {"type": "mrr_at_3", "value": 32.286}, {"type": "mrr_at_5", "value": 33.663}, {"type": "ndcg_at_1", "value": 26.433}, {"type": "ndcg_at_10", "value": 38.173}, {"type": "ndcg_at_100", "value": 43.884}, {"type": "ndcg_at_1000", "value": 45.916000000000004}, {"type": "ndcg_at_3", "value": 32.419}, {"type": "ndcg_at_5", "value": 35.092}, {"type": "precision_at_1", "value": 26.433}, {"type": "precision_at_10", "value": 6.1}, {"type": "precision_at_100", "value": 0.963}, {"type": "precision_at_1000", "value": 0.126}, {"type": "precision_at_3", "value": 
13.802}, {"type": "precision_at_5", "value": 9.871}, {"type": "recall_at_1", "value": 24.298000000000002}, {"type": "recall_at_10", "value": 52.554}, {"type": "recall_at_100", "value": 79.345}, {"type": "recall_at_1000", "value": 94.464}, {"type": "recall_at_3", "value": 37.036}, {"type": "recall_at_5", "value": 43.518}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 14.194999999999999}, {"type": "map_at_10", "value": 24.563}, {"type": "map_at_100", "value": 26.775}, {"type": "map_at_1000", "value": 26.965}, {"type": "map_at_3", "value": 19.983999999999998}, {"type": "map_at_5", "value": 22.24}, {"type": "mrr_at_1", "value": 31.661}, {"type": "mrr_at_10", "value": 44.804}, {"type": "mrr_at_100", "value": 45.655}, {"type": "mrr_at_1000", "value": 45.678000000000004}, {"type": "mrr_at_3", "value": 41.292}, {"type": "mrr_at_5", "value": 43.468}, {"type": "ndcg_at_1", "value": 31.661}, {"type": "ndcg_at_10", "value": 34.271}, {"type": "ndcg_at_100", "value": 42.04}, {"type": "ndcg_at_1000", "value": 45.101}, {"type": "ndcg_at_3", "value": 27.529999999999998}, {"type": "ndcg_at_5", "value": 29.862}, {"type": "precision_at_1", "value": 31.661}, {"type": "precision_at_10", "value": 10.925}, {"type": "precision_at_100", "value": 1.92}, {"type": "precision_at_1000", "value": 0.25}, {"type": "precision_at_3", "value": 20.456}, {"type": "precision_at_5", "value": 16.012999999999998}, {"type": "recall_at_1", "value": 14.194999999999999}, {"type": "recall_at_10", "value": 41.388999999999996}, {"type": "recall_at_100", "value": 67.58800000000001}, {"type": "recall_at_1000", "value": 84.283}, {"type": "recall_at_3", "value": 25.089}, {"type": "recall_at_5", "value": 31.642}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 9.898}, {"type": "map_at_10", "value": 23.226}, {"type": "map_at_100", "value": 33.372}, {"type": "map_at_1000", "value": 35.407}, {"type": "map_at_3", "value": 15.892999999999999}, {"type": "map_at_5", "value": 18.747}, {"type": "mrr_at_1", "value": 73.5}, {"type": "mrr_at_10", "value": 80.404}, {"type": "mrr_at_100", "value": 80.671}, {"type": "mrr_at_1000", "value": 80.676}, {"type": "mrr_at_3", "value": 78.958}, {"type": "mrr_at_5", "value": 79.683}, {"type": "ndcg_at_1", "value": 62.0}, {"type": "ndcg_at_10", "value": 48.337}, {"type": "ndcg_at_100", "value": 53.474}, {"type": "ndcg_at_1000", "value": 60.999}, {"type": "ndcg_at_3", "value": 52.538}, {"type": "ndcg_at_5", "value": 49.659}, {"type": "precision_at_1", "value": 73.5}, {"type": "precision_at_10", "value": 39.25}, {"type": "precision_at_100", "value": 12.4}, {"type": "precision_at_1000", "value": 2.4459999999999997}, {"type": "precision_at_3", "value": 56.333}, {"type": "precision_at_5", "value": 48.15}, {"type": "recall_at_1", "value": 9.898}, {"type": "recall_at_10", "value": 29.511}, {"type": "recall_at_100", "value": 60.45700000000001}, {"type": "recall_at_1000", "value": 84.47200000000001}, {"type": "recall_at_3", "value": 17.064}, {"type": "recall_at_5", "value": 21.258}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 
51.19999999999999}, {"type": "f1", "value": 46.23854137552949}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 80.093}, {"type": "map_at_10", "value": 87.139}, {"type": "map_at_100", "value": 87.333}, {"type": "map_at_1000", "value": 87.344}, {"type": "map_at_3", "value": 86.395}, {"type": "map_at_5", "value": 86.866}, {"type": "mrr_at_1", "value": 86.36399999999999}, {"type": "mrr_at_10", "value": 91.867}, {"type": "mrr_at_100", "value": 91.906}, {"type": "mrr_at_1000", "value": 91.90700000000001}, {"type": "mrr_at_3", "value": 91.484}, {"type": "mrr_at_5", "value": 91.759}, {"type": "ndcg_at_1", "value": 86.36399999999999}, {"type": "ndcg_at_10", "value": 90.197}, {"type": "ndcg_at_100", "value": 90.819}, {"type": "ndcg_at_1000", "value": 91.01599999999999}, {"type": "ndcg_at_3", "value": 89.166}, {"type": "ndcg_at_5", "value": 89.74}, {"type": "precision_at_1", "value": 86.36399999999999}, {"type": "precision_at_10", "value": 10.537}, {"type": "precision_at_100", "value": 1.106}, {"type": "precision_at_1000", "value": 0.11399999999999999}, {"type": "precision_at_3", "value": 33.608}, {"type": "precision_at_5", "value": 20.618}, {"type": "recall_at_1", "value": 80.093}, {"type": "recall_at_10", "value": 95.003}, {"type": "recall_at_100", "value": 97.328}, {"type": "recall_at_1000", "value": 98.485}, {"type": "recall_at_3", "value": 92.072}, {"type": "recall_at_5", "value": 93.661}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 29.063}, {"type": "map_at_10", "value": 47.113}, {"type": "map_at_100", "value": 49.294}, {"type": "map_at_1000", "value": 49.422}, {"type": "map_at_3", "value": 40.955000000000005}, {"type": "map_at_5", "value": 44.5}, {"type": "mrr_at_1", "value": 55.401}, {"type": "mrr_at_10", "value": 62.99400000000001}, {"type": "mrr_at_100", "value": 63.63999999999999}, {"type": "mrr_at_1000", "value": 63.661}, {"type": "mrr_at_3", "value": 61.034}, {"type": "mrr_at_5", "value": 62.253}, {"type": "ndcg_at_1", "value": 55.401}, {"type": "ndcg_at_10", "value": 55.332}, {"type": "ndcg_at_100", "value": 61.931000000000004}, {"type": "ndcg_at_1000", "value": 63.841}, {"type": "ndcg_at_3", "value": 50.92}, {"type": "ndcg_at_5", "value": 52.525}, {"type": "precision_at_1", "value": 55.401}, {"type": "precision_at_10", "value": 15.262}, {"type": "precision_at_100", "value": 2.231}, {"type": "precision_at_1000", "value": 0.256}, {"type": "precision_at_3", "value": 33.848}, {"type": "precision_at_5", "value": 25.031}, {"type": "recall_at_1", "value": 29.063}, {"type": "recall_at_10", "value": 62.498}, {"type": "recall_at_100", "value": 85.86}, {"type": "recall_at_1000", "value": 97.409}, {"type": "recall_at_3", "value": 45.472}, {"type": "recall_at_5", "value": 53.344}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 37.205}, {"type": "map_at_10", "value": 64.19399999999999}, {"type": "map_at_100", "value": 65.183}, {"type": "map_at_1000", "value": 65.23299999999999}, {"type": "map_at_3", "value": 60.239}, {"type": "map_at_5", "value": 62.695}, {"type": "mrr_at_1", "value": 74.409}, {"type": "mrr_at_10", "value": 80.84}, {"type": "mrr_at_100", "value": 
81.10199999999999}, {"type": "mrr_at_1000", "value": 81.109}, {"type": "mrr_at_3", "value": 79.739}, {"type": "mrr_at_5", "value": 80.46600000000001}, {"type": "ndcg_at_1", "value": 74.409}, {"type": "ndcg_at_10", "value": 71.757}, {"type": "ndcg_at_100", "value": 75.152}, {"type": "ndcg_at_1000", "value": 76.098}, {"type": "ndcg_at_3", "value": 66.174}, {"type": "ndcg_at_5", "value": 69.283}, {"type": "precision_at_1", "value": 74.409}, {"type": "precision_at_10", "value": 15.503}, {"type": "precision_at_100", "value": 1.8110000000000002}, {"type": "precision_at_1000", "value": 0.194}, {"type": "precision_at_3", "value": 43.457}, {"type": "precision_at_5", "value": 28.532000000000004}, {"type": "recall_at_1", "value": 37.205}, {"type": "recall_at_10", "value": 77.515}, {"type": "recall_at_100", "value": 90.56}, {"type": "recall_at_1000", "value": 96.759}, {"type": "recall_at_3", "value": 65.18599999999999}, {"type": "recall_at_5", "value": 71.33}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 82.9448}, {"type": "ap", "value": 78.25923353099166}, {"type": "f1", "value": 82.86422040179993}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 22.834}, {"type": "map_at_10", "value": 35.85}, {"type": "map_at_100", "value": 37.013}, {"type": "map_at_1000", "value": 37.056}, {"type": "map_at_3", "value": 31.613000000000003}, {"type": "map_at_5", "value": 34.113}, {"type": "mrr_at_1", "value": 23.424}, {"type": "mrr_at_10", "value": 36.398}, {"type": "mrr_at_100", "value": 37.498}, {"type": "mrr_at_1000", "value": 37.534}, {"type": "mrr_at_3", "value": 32.275999999999996}, {"type": "mrr_at_5", "value": 34.705000000000005}, {"type": "ndcg_at_1", "value": 23.424}, {"type": "ndcg_at_10", "value": 43.236999999999995}, {"type": "ndcg_at_100", "value": 48.776}, {"type": "ndcg_at_1000", "value": 49.778}, {"type": "ndcg_at_3", "value": 34.692}, {"type": "ndcg_at_5", "value": 39.119}, {"type": "precision_at_1", "value": 23.424}, {"type": "precision_at_10", "value": 6.918}, {"type": "precision_at_100", "value": 0.9690000000000001}, {"type": "precision_at_1000", "value": 0.105}, {"type": "precision_at_3", "value": 14.881}, {"type": "precision_at_5", "value": 11.183}, {"type": "recall_at_1", "value": 22.834}, {"type": "recall_at_10", "value": 66.03999999999999}, {"type": "recall_at_100", "value": 91.532}, {"type": "recall_at_1000", "value": 99.068}, {"type": "recall_at_3", "value": 42.936}, {"type": "recall_at_5", "value": 53.539}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 96.1377108983128}, {"type": "f1", "value": 95.87034720246666}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 86.10579115367078}, {"type": "f1", "value": 70.20810321445228}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": 
"mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 79.80497646267652}, {"type": "f1", "value": 77.32475274059293}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 81.52320107599192}, {"type": "f1", "value": 81.22312939311655}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 30.709106678767018}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 32.95879128399585}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 32.67476691128679}, {"type": "mrr", "value": 33.921654478513986}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 7.223}, {"type": "map_at_10", "value": 15.992999999999999}, {"type": "map_at_100", "value": 21.09}, {"type": "map_at_1000", "value": 22.822}, {"type": "map_at_3", "value": 11.475}, {"type": "map_at_5", "value": 13.501}, {"type": "mrr_at_1", "value": 53.251000000000005}, {"type": "mrr_at_10", "value": 61.878}, {"type": "mrr_at_100", "value": 62.307}, {"type": "mrr_at_1000", "value": 62.342}, {"type": "mrr_at_3", "value": 60.01}, {"type": "mrr_at_5", "value": 61.202}, {"type": "ndcg_at_1", "value": 51.702999999999996}, {"type": "ndcg_at_10", "value": 41.833999999999996}, {"type": "ndcg_at_100", "value": 39.061}, {"type": "ndcg_at_1000", "value": 47.397}, {"type": "ndcg_at_3", "value": 47.083000000000006}, {"type": "ndcg_at_5", "value": 44.722}, {"type": "precision_at_1", "value": 53.251000000000005}, {"type": "precision_at_10", "value": 31.3}, {"type": "precision_at_100", "value": 10.254000000000001}, {"type": "precision_at_1000", "value": 2.338}, {"type": "precision_at_3", "value": 43.756}, {"type": "precision_at_5", "value": 38.824}, {"type": "recall_at_1", "value": 7.223}, {"type": "recall_at_10", "value": 20.529}, {"type": "recall_at_100", "value": 39.818}, {"type": "recall_at_1000", "value": 70.152}, {"type": "recall_at_3", "value": 12.666}, {"type": "recall_at_5", "value": 15.798000000000002}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 38.847}, {"type": "map_at_10", "value": 56.255}, {"type": "map_at_100", "value": 57.019}, {"type": "map_at_1000", "value": 57.03}, {"type": "map_at_3", "value": 51.665000000000006}, {"type": "map_at_5", "value": 54.543}, {"type": "mrr_at_1", "value": 43.801}, {"type": "mrr_at_10", "value": 58.733999999999995}, {"type": "mrr_at_100", "value": 59.206}, {"type": "mrr_at_1000", "value": 59.21300000000001}, {"type": "mrr_at_3", 
"value": 55.266999999999996}, {"type": "mrr_at_5", "value": 57.449}, {"type": "ndcg_at_1", "value": 43.772}, {"type": "ndcg_at_10", "value": 64.213}, {"type": "ndcg_at_100", "value": 67.13}, {"type": "ndcg_at_1000", "value": 67.368}, {"type": "ndcg_at_3", "value": 55.977}, {"type": "ndcg_at_5", "value": 60.597}, {"type": "precision_at_1", "value": 43.772}, {"type": "precision_at_10", "value": 10.272}, {"type": "precision_at_100", "value": 1.193}, {"type": "precision_at_1000", "value": 0.121}, {"type": "precision_at_3", "value": 25.261}, {"type": "precision_at_5", "value": 17.885}, {"type": "recall_at_1", "value": 38.847}, {"type": "recall_at_10", "value": 85.76700000000001}, {"type": "recall_at_100", "value": 98.054}, {"type": "recall_at_1000", "value": 99.812}, {"type": "recall_at_3", "value": 64.82}, {"type": "recall_at_5", "value": 75.381}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 68.77}, {"type": "map_at_10", "value": 83.195}, {"type": "map_at_100", "value": 83.869}, {"type": "map_at_1000", "value": 83.883}, {"type": "map_at_3", "value": 80.04599999999999}, {"type": "map_at_5", "value": 82.011}, {"type": "mrr_at_1", "value": 79.2}, {"type": "mrr_at_10", "value": 85.942}, {"type": "mrr_at_100", "value": 86.063}, {"type": "mrr_at_1000", "value": 86.064}, {"type": "mrr_at_3", "value": 84.82}, {"type": "mrr_at_5", "value": 85.56899999999999}, {"type": "ndcg_at_1", "value": 79.17999999999999}, {"type": "ndcg_at_10", "value": 87.161}, {"type": "ndcg_at_100", "value": 88.465}, {"type": "ndcg_at_1000", "value": 88.553}, {"type": "ndcg_at_3", "value": 83.958}, {"type": "ndcg_at_5", "value": 85.699}, {"type": "precision_at_1", "value": 79.17999999999999}, {"type": "precision_at_10", "value": 13.401}, {"type": "precision_at_100", "value": 1.54}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 36.903000000000006}, {"type": "precision_at_5", "value": 24.404}, {"type": "recall_at_1", "value": 68.77}, {"type": "recall_at_10", "value": 95.132}, {"type": "recall_at_100", "value": 99.58200000000001}, {"type": "recall_at_1000", "value": 99.997}, {"type": "recall_at_3", "value": 86.119}, {"type": "recall_at_5", "value": 90.932}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 61.7204049654583}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 63.98164986883849}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.443}, {"type": "map_at_10", "value": 13.86}, {"type": "map_at_100", "value": 16.496}, {"type": "map_at_1000", "value": 16.836000000000002}, {"type": "map_at_3", "value": 9.661}, {"type": "map_at_5", "value": 11.745}, {"type": "mrr_at_1", "value": 26.8}, {"type": "mrr_at_10", "value": 37.777}, {"type": "mrr_at_100", "value": 38.928000000000004}, {"type": "mrr_at_1000", "value": 38.967}, {"type": "mrr_at_3", "value": 34.083000000000006}, {"type": "mrr_at_5", "value": 
36.308}, {"type": "ndcg_at_1", "value": 26.8}, {"type": "ndcg_at_10", "value": 22.961000000000002}, {"type": "ndcg_at_100", "value": 32.582}, {"type": "ndcg_at_1000", "value": 37.972}, {"type": "ndcg_at_3", "value": 21.292}, {"type": "ndcg_at_5", "value": 18.945999999999998}, {"type": "precision_at_1", "value": 26.8}, {"type": "precision_at_10", "value": 12.06}, {"type": "precision_at_100", "value": 2.593}, {"type": "precision_at_1000", "value": 0.388}, {"type": "precision_at_3", "value": 19.900000000000002}, {"type": "precision_at_5", "value": 16.84}, {"type": "recall_at_1", "value": 5.443}, {"type": "recall_at_10", "value": 24.445}, {"type": "recall_at_100", "value": 52.602000000000004}, {"type": "recall_at_1000", "value": 78.767}, {"type": "recall_at_3", "value": 12.098}, {"type": "recall_at_5", "value": 17.077}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_spearman", "value": 83.9379272617096}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_spearman", "value": 79.26752176661364}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_spearman", "value": 84.8327309083665}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_spearman", "value": 82.9394255552954}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_spearman", "value": 88.08995363382608}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_spearman", "value": 86.53522220099619}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_spearman", "value": 89.57796559847532}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_spearman", "value": 67.66598855577894}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_spearman", "value": 88.0472708354572}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 86.04689157650684}, {"type": "mrr", "value": 96.51889958262507}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": 
"default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 62.827999999999996}, {"type": "map_at_10", "value": 73.54899999999999}, {"type": "map_at_100", "value": 73.892}, {"type": "map_at_1000", "value": 73.901}, {"type": "map_at_3", "value": 70.663}, {"type": "map_at_5", "value": 72.449}, {"type": "mrr_at_1", "value": 66.0}, {"type": "mrr_at_10", "value": 74.554}, {"type": "mrr_at_100", "value": 74.81700000000001}, {"type": "mrr_at_1000", "value": 74.82600000000001}, {"type": "mrr_at_3", "value": 72.667}, {"type": "mrr_at_5", "value": 73.717}, {"type": "ndcg_at_1", "value": 66.0}, {"type": "ndcg_at_10", "value": 78.218}, {"type": "ndcg_at_100", "value": 79.706}, {"type": "ndcg_at_1000", "value": 79.925}, {"type": "ndcg_at_3", "value": 73.629}, {"type": "ndcg_at_5", "value": 75.89}, {"type": "precision_at_1", "value": 66.0}, {"type": "precision_at_10", "value": 10.333}, {"type": "precision_at_100", "value": 1.113}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 28.889}, {"type": "precision_at_5", "value": 19.067}, {"type": "recall_at_1", "value": 62.827999999999996}, {"type": "recall_at_10", "value": 91.533}, {"type": "recall_at_100", "value": 98.333}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 79.0}, {"type": "recall_at_5", "value": 84.68900000000001}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.8019801980198}, {"type": "cos_sim_ap", "value": 95.09301057928796}, {"type": "cos_sim_f1", "value": 89.71193415637859}, {"type": "cos_sim_precision", "value": 92.37288135593221}, {"type": "cos_sim_recall", "value": 87.2}, {"type": "dot_accuracy", "value": 99.72079207920792}, {"type": "dot_ap", "value": 92.77707970155015}, {"type": "dot_f1", "value": 85.88588588588588}, {"type": "dot_precision", "value": 85.97194388777555}, {"type": "dot_recall", "value": 85.8}, {"type": "euclidean_accuracy", "value": 99.7980198019802}, {"type": "euclidean_ap", "value": 95.04124481520121}, {"type": "euclidean_f1", "value": 89.61693548387096}, {"type": "euclidean_precision", "value": 90.34552845528455}, {"type": "euclidean_recall", "value": 88.9}, {"type": "manhattan_accuracy", "value": 99.7960396039604}, {"type": "manhattan_ap", "value": 95.02691504694813}, {"type": "manhattan_f1", "value": 89.60321446509292}, {"type": "manhattan_precision", "value": 90.0100908173562}, {"type": "manhattan_recall", "value": 89.2}, {"type": "max_accuracy", "value": 99.8019801980198}, {"type": "max_ap", "value": 95.09301057928796}, {"type": "max_f1", "value": 89.71193415637859}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 72.74124969197169}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 32.262798307863996}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": 
"mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 54.823414217790464}, {"type": "mrr", "value": 55.557133838383834}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 31.01226930465494}, {"type": "cos_sim_spearman", "value": 30.9368445798007}, {"type": "dot_pearson", "value": 30.204833368654533}, {"type": "dot_spearman", "value": 30.438900411966618}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.22699999999999998}, {"type": "map_at_10", "value": 2.0420000000000003}, {"type": "map_at_100", "value": 13.33}, {"type": "map_at_1000", "value": 33.627}, {"type": "map_at_3", "value": 0.639}, {"type": "map_at_5", "value": 1.056}, {"type": "mrr_at_1", "value": 84.0}, {"type": "mrr_at_10", "value": 91.167}, {"type": "mrr_at_100", "value": 91.167}, {"type": "mrr_at_1000", "value": 91.167}, {"type": "mrr_at_3", "value": 90.667}, {"type": "mrr_at_5", "value": 91.167}, {"type": "ndcg_at_1", "value": 82.0}, {"type": "ndcg_at_10", "value": 80.337}, {"type": "ndcg_at_100", "value": 65.852}, {"type": "ndcg_at_1000", "value": 59.821000000000005}, {"type": "ndcg_at_3", "value": 81.061}, {"type": "ndcg_at_5", "value": 81.396}, {"type": "precision_at_1", "value": 84.0}, {"type": "precision_at_10", "value": 85.0}, {"type": "precision_at_100", "value": 67.75999999999999}, {"type": "precision_at_1000", "value": 26.272000000000002}, {"type": "precision_at_3", "value": 85.333}, {"type": "precision_at_5", "value": 86.4}, {"type": "recall_at_1", "value": 0.22699999999999998}, {"type": "recall_at_10", "value": 2.241}, {"type": "recall_at_100", "value": 16.478}, {"type": "recall_at_1000", "value": 56.442}, {"type": "recall_at_3", "value": 0.672}, {"type": "recall_at_5", "value": 1.143}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 1.836}, {"type": "map_at_10", "value": 8.536000000000001}, {"type": "map_at_100", "value": 14.184}, {"type": "map_at_1000", "value": 15.885}, {"type": "map_at_3", "value": 3.7359999999999998}, {"type": "map_at_5", "value": 5.253}, {"type": "mrr_at_1", "value": 22.448999999999998}, {"type": "mrr_at_10", "value": 34.77}, {"type": "mrr_at_100", "value": 36.18}, {"type": "mrr_at_1000", "value": 36.18}, {"type": "mrr_at_3", "value": 30.612000000000002}, {"type": "mrr_at_5", "value": 32.449}, {"type": "ndcg_at_1", "value": 20.408}, {"type": "ndcg_at_10", "value": 20.498}, {"type": "ndcg_at_100", "value": 33.354}, {"type": "ndcg_at_1000", "value": 45.699}, {"type": "ndcg_at_3", "value": 19.292}, {"type": "ndcg_at_5", "value": 19.541}, {"type": "precision_at_1", "value": 22.448999999999998}, {"type": "precision_at_10", "value": 19.387999999999998}, {"type": "precision_at_100", "value": 7.163}, {"type": "precision_at_1000", "value": 1.541}, {"type": "precision_at_3", "value": 19.728}, {"type": "precision_at_5", "value": 20.0}, {"type": "recall_at_1", "value": 1.836}, {"type": "recall_at_10", "value": 15.212}, {"type": "recall_at_100", "value": 45.364}, {"type": "recall_at_1000", "value": 
83.64}, {"type": "recall_at_3", "value": 4.651000000000001}, {"type": "recall_at_5", "value": 7.736}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 70.5856}, {"type": "ap", "value": 14.297836125608864}, {"type": "f1", "value": 54.45458507465688}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 61.89869835880024}, {"type": "f1", "value": 62.15163526419782}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 56.408998393035446}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.78822197055493}, {"type": "cos_sim_ap", "value": 81.73234934293887}, {"type": "cos_sim_f1", "value": 74.16373812312898}, {"type": "cos_sim_precision", "value": 73.18263549961469}, {"type": "cos_sim_recall", "value": 75.17150395778364}, {"type": "dot_accuracy", "value": 87.85837754068069}, {"type": "dot_ap", "value": 79.69812660365871}, {"type": "dot_f1", "value": 72.52999744702579}, {"type": "dot_precision", "value": 70.25222551928783}, {"type": "dot_recall", "value": 74.96042216358839}, {"type": "euclidean_accuracy", "value": 88.74649818203493}, {"type": "euclidean_ap", "value": 81.47777928110055}, {"type": "euclidean_f1", "value": 74.1248097412481}, {"type": "euclidean_precision", "value": 71.37274059599413}, {"type": "euclidean_recall", "value": 77.0976253298153}, {"type": "manhattan_accuracy", "value": 88.7286165583835}, {"type": "manhattan_ap", "value": 81.47766386927232}, {"type": "manhattan_f1", "value": 74.16730231375541}, {"type": "manhattan_precision", "value": 71.56526005888125}, {"type": "manhattan_recall", "value": 76.96569920844327}, {"type": "max_accuracy", "value": 88.78822197055493}, {"type": "max_ap", "value": 81.73234934293887}, {"type": "max_f1", "value": 74.16730231375541}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.30026778437536}, {"type": "cos_sim_ap", "value": 86.56353001037664}, {"type": "cos_sim_f1", "value": 79.359197907585}, {"type": "cos_sim_precision", "value": 75.12379642365887}, {"type": "cos_sim_recall", "value": 84.10070834616569}, {"type": "dot_accuracy", "value": 88.8539604921023}, {"type": "dot_ap", "value": 85.44601003294055}, {"type": "dot_f1", "value": 78.20008094484713}, {"type": "dot_precision", "value": 74.88549080403072}, {"type": "dot_recall", "value": 81.82168155220204}, {"type": "euclidean_accuracy", "value": 89.25369658865992}, {"type": "euclidean_ap", "value": 86.46965679550075}, {"type": "euclidean_f1", "value": 
79.16785612332285}, {"type": "euclidean_precision", "value": 73.77627028465017}, {"type": "euclidean_recall", "value": 85.4096088697259}, {"type": "manhattan_accuracy", "value": 89.26727985407692}, {"type": "manhattan_ap", "value": 86.46460344566123}, {"type": "manhattan_f1", "value": 79.1723543358}, {"type": "manhattan_precision", "value": 74.20875420875421}, {"type": "manhattan_recall", "value": 84.84755158607946}, {"type": "max_accuracy", "value": 89.30026778437536}, {"type": "max_ap", "value": 86.56353001037664}, {"type": "max_f1", "value": 79.359197907585}]}]}]}
McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised
null
[ "peft", "safetensors", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2404.05961", "license:mit", "model-index", "region:us" ]
null
2024-04-30T02:35:26+00:00
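The metrics block in the metadata above is a machine-readable record of MTEB benchmark runs for the LLM2Vec checkpoint named in this record. As a hedged sketch of how scores in that format are typically produced with the `mteb` package (the task list, output folder, and the stand-in SentenceTransformer encoder below are all illustrative; loading LLM2Vec itself goes through the separate llm2vec package, which this sketch does not reproduce):

```python
# Hedged sketch: producing MTEB-style scores like those recorded above.
# Any object exposing encode(list_of_texts) -> embeddings can be evaluated;
# the MiniLM encoder here is only a stand-in, not the LLM2Vec model itself.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

evaluation = MTEB(tasks=["SciFact", "STSBenchmark"])  # two tasks from the block above
results = evaluation.run(model, output_folder="mteb_results")  # folder name illustrative
```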
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K36me3-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4564 - F1 Score: 0.7980 - Accuracy: 0.7996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5364 | 0.92 | 200 | 0.5215 | 0.7560 | 0.7586 | | 0.4948 | 1.83 | 400 | 0.5052 | 0.7660 | 0.7686 | | 0.4846 | 2.75 | 600 | 0.4944 | 0.7759 | 0.7778 | | 0.4841 | 3.67 | 800 | 0.4906 | 0.7740 | 0.7761 | | 0.4697 | 4.59 | 1000 | 0.4808 | 0.7865 | 0.7876 | | 0.4656 | 5.5 | 1200 | 0.4815 | 0.7797 | 0.7818 | | 0.4649 | 6.42 | 1400 | 0.4822 | 0.7845 | 0.7861 | | 0.4598 | 7.34 | 1600 | 0.4864 | 0.7849 | 0.7876 | | 0.4549 | 8.26 | 1800 | 0.4851 | 0.7814 | 0.7833 | | 0.4612 | 9.17 | 2000 | 0.4770 | 0.7853 | 0.7876 | | 0.4564 | 10.09 | 2200 | 0.4957 | 0.7749 | 0.7792 | | 0.4526 | 11.01 | 2400 | 0.4733 | 0.7906 | 0.7927 | | 0.4536 | 11.93 | 2600 | 0.4669 | 0.7903 | 0.7916 | | 0.4496 | 12.84 | 2800 | 0.4735 | 0.7900 | 0.7921 | | 0.4462 | 13.76 | 3000 | 0.4792 | 0.7915 | 0.7942 | | 0.445 | 14.68 | 3200 | 0.4707 | 0.7925 | 0.7939 | | 0.4462 | 15.6 | 3400 | 0.4699 | 0.7889 | 0.7910 | | 0.4433 | 16.51 | 3600 | 0.4768 | 0.7922 | 0.7942 | | 0.4438 | 17.43 | 3800 | 0.4649 | 0.7917 | 0.7930 | | 0.4401 | 18.35 | 4000 | 0.4676 | 0.7912 | 0.7930 | | 0.4412 | 19.27 | 4200 | 0.4757 | 0.7896 | 0.7913 | | 0.4397 | 20.18 | 4400 | 0.4778 | 0.7887 | 0.7910 | | 0.435 | 21.1 | 4600 | 0.4743 | 0.7910 | 0.7927 | | 0.4381 | 22.02 | 4800 | 0.4741 | 0.7896 | 0.7913 | | 0.4369 | 22.94 | 5000 | 0.4660 | 0.7913 | 0.7933 | | 0.4355 | 23.85 | 5200 | 0.4656 | 0.7911 | 0.7927 | | 0.4326 | 24.77 | 5400 | 0.4789 | 0.7857 | 0.7884 | | 0.4347 | 25.69 | 5600 | 0.4708 | 0.7890 | 0.7910 | | 0.4317 | 26.61 | 5800 | 0.4671 | 0.7909 | 0.7924 | | 0.4329 | 27.52 | 6000 | 0.4792 | 0.7873 | 0.7899 | | 0.4342 | 28.44 | 6200 | 0.4713 | 0.7896 | 0.7913 | | 0.429 | 29.36 | 6400 | 0.4712 | 0.7887 | 0.7910 | | 0.4286 | 30.28 | 6600 | 0.4734 | 0.7878 | 0.7904 | | 0.4308 | 31.19 | 6800 | 0.4683 | 0.7929 | 0.7942 | | 0.4317 | 32.11 | 7000 | 0.4692 | 0.7884 | 0.7904 | | 0.4273 | 33.03 | 7200 | 0.4705 | 0.7895 | 0.7913 | | 0.4279 | 33.94 | 7400 | 0.4733 | 0.7875 | 0.7896 | | 0.4277 | 34.86 | 7600 | 0.4733 | 0.7864 | 0.7887 | | 0.4274 | 35.78 | 7800 | 0.4687 | 0.7930 | 0.7944 | | 0.4291 | 36.7 | 8000 | 0.4684 | 0.7884 | 0.7904 | | 0.4271 | 37.61 | 8200 | 0.4729 | 0.7865 | 0.7893 | | 0.4268 | 38.53 | 8400 | 0.4691 | 0.7895 | 0.7913 | | 0.4245 | 39.45 | 8600 | 0.4715 | 0.7859 | 0.7881 | | 0.4226 | 40.37 | 8800 | 0.4767 | 0.7884 | 0.7907 | | 
0.4282 | 41.28 | 9000 | 0.4701 | 0.7897 | 0.7919 | | 0.4216 | 42.2 | 9200 | 0.4703 | 0.7880 | 0.7899 | | 0.4218 | 43.12 | 9400 | 0.4721 | 0.7883 | 0.7901 | | 0.426 | 44.04 | 9600 | 0.4703 | 0.7880 | 0.7901 | | 0.4224 | 44.95 | 9800 | 0.4726 | 0.7896 | 0.7919 | | 0.4236 | 45.87 | 10000 | 0.4713 | 0.7889 | 0.7910 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
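Since the descriptive sections of this card are placeholders, a minimal inference sketch may help. It assumes the adapter wraps a standard sequence-classification head on the seqsight base model; the two-label setup, the toy DNA input, and whether the base checkpoint needs `trust_remote_code` are all assumptions not confirmed by the card:

```python
# Hedged sketch: loading the PEFT adapter from this card on top of its base model.
# Assumptions (not documented above): binary sequence classification, and that the
# base checkpoint loads through the standard Auto classes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_56M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the trained adapter weights
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```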
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:36:00+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K36me3-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4559 - F1 Score: 0.8045 - Accuracy: 0.8065 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5255 | 0.92 | 200 | 0.5330 | 0.7521 | 0.7566 | | 0.4811 | 1.83 | 400 | 0.4920 | 0.7744 | 0.7772 | | 0.47 | 2.75 | 600 | 0.4790 | 0.7822 | 0.7841 | | 0.4688 | 3.67 | 800 | 0.4757 | 0.7875 | 0.7893 | | 0.4537 | 4.59 | 1000 | 0.4691 | 0.7954 | 0.7967 | | 0.4473 | 5.5 | 1200 | 0.4725 | 0.7914 | 0.7936 | | 0.4478 | 6.42 | 1400 | 0.4741 | 0.7948 | 0.7959 | | 0.4406 | 7.34 | 1600 | 0.4777 | 0.7829 | 0.7861 | | 0.4343 | 8.26 | 1800 | 0.4757 | 0.7884 | 0.7904 | | 0.4389 | 9.17 | 2000 | 0.4662 | 0.7885 | 0.7910 | | 0.4343 | 10.09 | 2200 | 0.4969 | 0.7705 | 0.7758 | | 0.4285 | 11.01 | 2400 | 0.4684 | 0.7901 | 0.7921 | | 0.4279 | 11.93 | 2600 | 0.4602 | 0.7940 | 0.7947 | | 0.4246 | 12.84 | 2800 | 0.4694 | 0.7860 | 0.7887 | | 0.4196 | 13.76 | 3000 | 0.4813 | 0.7828 | 0.7864 | | 0.4161 | 14.68 | 3200 | 0.4710 | 0.7918 | 0.7939 | | 0.4141 | 15.6 | 3400 | 0.4650 | 0.7945 | 0.7959 | | 0.4138 | 16.51 | 3600 | 0.4832 | 0.7901 | 0.7927 | | 0.4107 | 17.43 | 3800 | 0.4799 | 0.7887 | 0.7916 | | 0.4075 | 18.35 | 4000 | 0.4638 | 0.7936 | 0.7953 | | 0.4062 | 19.27 | 4200 | 0.4874 | 0.7941 | 0.7962 | | 0.4037 | 20.18 | 4400 | 0.4863 | 0.7916 | 0.7936 | | 0.3987 | 21.1 | 4600 | 0.4773 | 0.7965 | 0.7976 | | 0.3985 | 22.02 | 4800 | 0.4745 | 0.7940 | 0.7956 | | 0.3972 | 22.94 | 5000 | 0.4818 | 0.7888 | 0.7919 | | 0.3948 | 23.85 | 5200 | 0.4807 | 0.7968 | 0.7987 | | 0.389 | 24.77 | 5400 | 0.4960 | 0.7899 | 0.7927 | | 0.391 | 25.69 | 5600 | 0.4787 | 0.7974 | 0.7993 | | 0.3885 | 26.61 | 5800 | 0.4725 | 0.7962 | 0.7976 | | 0.3884 | 27.52 | 6000 | 0.4987 | 0.7897 | 0.7921 | | 0.3868 | 28.44 | 6200 | 0.4780 | 0.7996 | 0.8010 | | 0.3799 | 29.36 | 6400 | 0.4758 | 0.7952 | 0.7967 | | 0.3805 | 30.28 | 6600 | 0.4910 | 0.7925 | 0.7950 | | 0.3827 | 31.19 | 6800 | 0.4769 | 0.7972 | 0.7985 | | 0.381 | 32.11 | 7000 | 0.4820 | 0.7954 | 0.7973 | | 0.3746 | 33.03 | 7200 | 0.4932 | 0.7949 | 0.7964 | | 0.3771 | 33.94 | 7400 | 0.4834 | 0.7944 | 0.7964 | | 0.3739 | 34.86 | 7600 | 0.4916 | 0.7901 | 0.7924 | | 0.3735 | 35.78 | 7800 | 0.4882 | 0.7996 | 0.8007 | | 0.3757 | 36.7 | 8000 | 0.4846 | 0.7970 | 0.7987 | | 0.3713 | 37.61 | 8200 | 0.4923 | 0.7930 | 0.7953 | | 0.3712 | 38.53 | 8400 | 0.4950 | 0.7972 | 0.7990 | | 0.3691 | 39.45 | 8600 | 0.4936 | 0.7936 | 0.7959 | | 0.3675 | 40.37 | 8800 | 0.5022 | 0.7935 | 0.7956 | | 0.37 | 
41.28 | 9000 | 0.4927 | 0.7945 | 0.7964 | | 0.3662 | 42.2 | 9200 | 0.4894 | 0.7957 | 0.7976 | | 0.3663 | 43.12 | 9400 | 0.4940 | 0.7948 | 0.7964 | | 0.3676 | 44.04 | 9600 | 0.4935 | 0.7947 | 0.7967 | | 0.3665 | 44.95 | 9800 | 0.4951 | 0.7949 | 0.7970 | | 0.365 | 45.87 | 10000 | 0.4952 | 0.7950 | 0.7970 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
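For readers who want to reproduce the recipe, the hyperparameters listed above (identical in the L1_f card earlier) map naturally onto transformers' `TrainingArguments`. A sketch under assumptions: the output path and evaluation cadence are invented (the 200-step cadence is read off the results table), and the original run may have used a custom loop rather than `Trainer`:

```python
# Hedged sketch: the card's hyperparameters expressed as TrainingArguments.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are already the Trainer defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gue_emp_h3k36me3_l8f",  # hypothetical path
    learning_rate=5e-4,                 # learning_rate: 0.0005
    per_device_train_batch_size=128,    # train_batch_size: 128
    per_device_eval_batch_size=128,     # eval_batch_size: 128
    seed=42,                            # seed: 42
    lr_scheduler_type="linear",         # lr_scheduler_type: linear
    max_steps=10_000,                   # training_steps: 10000
    evaluation_strategy="steps",        # the table evaluates every 200 steps
    eval_steps=200,
)
```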
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:36:20+00:00
text-classification
transformers
{}
RyanJT/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:36:42+00:00
text-classification
transformers
{}
NiWang2024/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:36:48+00:00
null
null
This is a copy of the zero123-xl model from https://zero123.cs.columbia.edu/; please refer to [their Hugging Face space](https://huggingface.co/cvlab) for more information.
{}
kealiu/zero123-xl
null
[ "region:us" ]
null
2024-04-30T02:37:23+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA26 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3788 | 0.09 | 10 | 0.1706 | | 0.1647 | 0.18 | 20 | 0.1607 | | 0.1505 | 0.27 | 30 | 0.1637 | | 0.1577 | 0.36 | 40 | 0.1513 | | 0.1517 | 0.45 | 50 | 0.1517 | | 0.1528 | 0.54 | 60 | 0.1497 | | 0.1516 | 0.63 | 70 | 0.1478 | | 0.1492 | 0.73 | 80 | 0.1647 | | 0.1507 | 0.82 | 90 | 0.1472 | | 0.1498 | 0.91 | 100 | 0.1525 | | 0.1516 | 1.0 | 110 | 0.1518 | | 0.1484 | 1.09 | 120 | 0.1495 | | 0.1494 | 1.18 | 130 | 0.1516 | | 0.1487 | 1.27 | 140 | 0.1508 | | 0.15 | 1.36 | 150 | 0.1485 | | 0.1454 | 1.45 | 160 | 0.1474 | | 0.1458 | 1.54 | 170 | 0.1476 | | 0.1482 | 1.63 | 180 | 0.1462 | | 0.1472 | 1.72 | 190 | 0.1505 | | 0.146 | 1.81 | 200 | 0.1486 | | 0.1495 | 1.9 | 210 | 0.1498 | | 0.1471 | 1.99 | 220 | 0.1510 | | 0.1478 | 2.08 | 230 | 0.1477 | | 0.1413 | 2.18 | 240 | 0.1460 | | 0.1425 | 2.27 | 250 | 0.1473 | | 0.1432 | 2.36 | 260 | 0.1473 | | 0.1408 | 2.45 | 270 | 0.1445 | | 0.1384 | 2.54 | 280 | 0.1428 | | 0.1378 | 2.63 | 290 | 0.1420 | | 0.1396 | 2.72 | 300 | 0.1387 | | 0.1376 | 2.81 | 310 | 0.1378 | | 0.1365 | 2.9 | 320 | 0.1367 | | 0.1368 | 2.99 | 330 | 0.1367 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
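The `cosine_with_restarts` schedule with 80 warmup steps named above corresponds to a stock transformers helper; a small sketch follows (the 330-step horizon is read off the step column of the results table, and `num_cycles` is a guess since the card does not state the number of restarts):

```python
# Hedged sketch: the learning-rate schedule named in the card, built with the
# stock transformers helper that Trainer maps "cosine_with_restarts" onto.
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]   # stand-in parameters
optimizer = torch.optim.AdamW(params, lr=3e-4)  # learning_rate: 0.0003

scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=80,     # lr_scheduler_warmup_steps: 80
    num_training_steps=330,  # approximate, from the step column above
    num_cycles=1,            # assumption; the card does not say how many restarts
)

for step in range(3):
    optimizer.step()
    scheduler.step()
    print(step, scheduler.get_last_lr())
```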
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA26", "results": []}]}
Litzy619/O0428HMA26
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T02:38:51+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428HMA25 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0179 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3732 | 0.09 | 10 | 0.1777 | | 0.1625 | 0.18 | 20 | 0.1554 | | 0.1492 | 0.27 | 30 | 0.1655 | | 0.1576 | 0.36 | 40 | 0.1524 | | 0.1518 | 0.45 | 50 | 0.1576 | | 0.1514 | 0.54 | 60 | 0.1505 | | 0.1536 | 0.63 | 70 | 0.1484 | | 0.1497 | 0.73 | 80 | 0.1585 | | 0.1499 | 0.82 | 90 | 0.1483 | | 0.1498 | 0.91 | 100 | 0.1500 | | 0.1518 | 1.0 | 110 | 0.1494 | | 0.1477 | 1.09 | 120 | 0.1481 | | 0.1458 | 1.18 | 130 | 0.1525 | | 0.1472 | 1.27 | 140 | 0.1484 | | 0.1487 | 1.36 | 150 | 0.1500 | | 0.1448 | 1.45 | 160 | 0.1456 | | 0.1363 | 1.54 | 170 | 0.1287 | | 0.0851 | 1.63 | 180 | 0.0912 | | 0.152 | 1.72 | 190 | 0.1214 | | 0.1799 | 1.81 | 200 | 0.0633 | | 0.0692 | 1.9 | 210 | 0.0533 | | 0.0482 | 1.99 | 220 | 0.0345 | | 0.0448 | 2.08 | 230 | 0.0370 | | 0.0304 | 2.18 | 240 | 0.0237 | | 0.0484 | 2.27 | 250 | 0.0524 | | 0.0422 | 2.36 | 260 | 0.0289 | | 0.0264 | 2.45 | 270 | 0.0223 | | 0.0174 | 2.54 | 280 | 0.0199 | | 0.0267 | 2.63 | 290 | 0.0188 | | 0.0237 | 2.72 | 300 | 0.0185 | | 0.018 | 2.81 | 310 | 0.0179 | | 0.0219 | 2.9 | 320 | 0.0180 | | 0.0228 | 2.99 | 330 | 0.0179 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
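The "Native AMP" line above refers to PyTorch's `torch.cuda.amp`; a generic training step under that setting looks like the sketch below. The tiny model, random batch, and loss are placeholders (the card's dataset is unknown), and a CUDA device is assumed:

```python
# Hedged sketch: one "Native AMP" training step (torch.cuda.amp), as enabled by
# the mixed_precision_training flag above. Model and data are placeholders.
import torch

model = torch.nn.Linear(16, 2).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)  # learning_rate: 0.0003
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(8, 16, device="cuda")         # train_batch_size: 8 per step;
y = torch.randint(0, 2, (8,), device="cuda")  # 16 accumulation steps -> 128 effective

optimizer.zero_grad(set_to_none=True)
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = torch.nn.functional.cross_entropy(model(x), y)
scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)         # unscales gradients, skips the step on inf/nan
scaler.update()
```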
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA25", "results": []}]}
Litzy619/O0428HMA25
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-30T02:38:51+00:00
text-generation
transformers
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, send us a message at [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/6MKLoX2ruLIaREiyb6coO.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | |------------------------|-----------|-----------|-----------|-----------| | Initialize From | LLaMA-3 8B| 65K | 262K | 524k | | Sequence Length 2^N | 16 | 18 | 19 | 20 | | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B | | Batch Size | 1 | 1 | 16 | 16 | | Gradient Accumulation Steps | 32 | 16 | 1 | 1 | | Steps | 30 | 24 | 50 | 50 | | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall)| 202 | 555 | 61 | 87 | **Quants**: - [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF) - [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit) ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al.
"World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.

#### Transformers pipeline

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

### Use with `llama3`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).

To download Original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```

For Hugging Face support, we recommend using transformers or TGI, but a similar command works.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.

|             | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|-------------|------------------|-----------------------|-------------------------|
| Llama 3 8B  | 1.3M             | 700                   | 390                     |
| Llama 3 70B | 6.4M             | 700                   | 1900                    |
| Total       | 7.7M             |                       | 2290                    |

**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency.
100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.

## Benchmarks

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).

### Base pretrained models

| Category              | Benchmark                    | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
|-----------------------|------------------------------|------------|-----------|------------|-------------|------------|
| General               | MMLU (5-shot)                | 66.6       | 45.7      | 53.8       | 79.5        | 69.7       |
| General               | AGIEval English (3-5 shot)   | 45.9       | 28.8      | 38.7       | 63.0        | 54.8       |
| General               | CommonSenseQA (7-shot)       | 72.6       | 57.6      | 67.6       | 83.8        | 78.7       |
| General               | Winogrande (5-shot)          | 76.1       | 73.3      | 75.4       | 83.1        | 81.8       |
| General               | BIG-Bench Hard (3-shot, CoT) | 61.1       | 38.1      | 47.0       | 81.3        | 65.7       |
| General               | ARC-Challenge (25-shot)      | 78.6       | 53.7      | 67.6       | 93.0        | 85.3       |
| Knowledge reasoning   | TriviaQA-Wiki (5-shot)       | 78.5       | 72.1      | 79.6       | 89.7        | 87.5       |
| Reading comprehension | SQuAD (1-shot)               | 76.4       | 72.2      | 72.1       | 85.6        | 82.6       |
| Reading comprehension | QuAC (1-shot, F1)            | 44.4       | 39.6      | 44.9       | 51.1        | 49.4       |
| Reading comprehension | BoolQ (0-shot)               | 75.7       | 65.5      | 66.9       | 79.0        | 73.1       |
| Reading comprehension | DROP (3-shot, F1)            | 58.4       | 37.9      | 49.8       | 79.7        | 70.2       |

### Instruction tuned models

| Benchmark            | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|----------------------|------------|------------|-------------|-------------|-------------|
| MMLU (5-shot)        | 68.4       | 34.1       | 47.8        | 82.0        | 52.9        |
| GPQA (0-shot)        | 34.2       | 21.7       | 22.3        | 39.5        | 21.0        |
| HumanEval (0-shot)   | 62.2       | 7.9        | 14.0        | 81.7        | 25.6        |
| GSM-8K (8-shot, CoT) | 79.6       | 25.7       | 77.4        | 93.0        | 57.5        |
| MATH (4-shot, CoT)   | 30.0       | 3.8        | 6.7         | 50.4        | 11.6        |
### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.

Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out of the box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pre-training and fine-tuning to the deployment of systems composed of safeguards that tailor safety to the specific use case and audience.

As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.

#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

<span style="text-decoration:underline;">Safety</span>

For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

<span style="text-decoration:underline;">Refusals</span>

In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.

We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

#### Responsible release

In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

**Misuse**

If you access or use Llama 3, you agree to the Acceptable Use Policy.
The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).

#### Critical risks

<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)

We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

### <span style="text-decoration:underline;">Cyber Security</span>

We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

### <span style="text-decoration:underline;">Child Safety</span>

Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)

## Citation instructions

@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
{"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"}
blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4.6-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:39:19+00:00
text2text-generation
transformers
test
{}
shrms/chart_korea
null
[ "transformers", "pytorch", "pix2struct", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:39:59+00:00
null
null
{}
vorstcavry/Firmware
null
[ "region:us" ]
null
2024-04-30T02:41:39+00:00
null
peft
# Model Card for eswardivi/llamathon_v1

This model was fine-tuned on microsoft/orca-math-word-problems-200k using MonsterAPI.

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]
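Since this repo ships PEFT adapter weights on top of meta-llama/Meta-Llama-3-8B-Instruct (per the record's metadata), a minimal loading sketch might look like the following. This is an illustration only: `AutoPeftModelForCausalLM` resolves the base model from the adapter config, and the example prompt is a placeholder.

```python
# Hedged sketch, assuming this repo contains a PEFT (LoRA-style) adapter
# for meta-llama/Meta-Llama-3-8B-Instruct, as the metadata suggests.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "eswardivi/llamathon_v1"
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

prompt = "Natalia sold clips to 48 of her friends in April..."  # placeholder math word problem
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```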
{"license": "apache-2.0", "library_name": "peft", "datasets": ["microsoft/orca-math-word-problems-200k"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
eswardivi/llamathon_v1
null
[ "peft", "safetensors", "dataset:microsoft/orca-math-word-problems-200k", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:apache-2.0", "region:us" ]
null
2024-04-30T02:41:48+00:00
null
transformers
# Uploaded model

- **Developed by:** MilaNguyen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
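As a hedged illustration of how an Unsloth-trained checkpoint like this is typically loaded for inference: the repo id comes from this record, while `max_seq_length` and 4-bit loading are assumptions chosen to match the unsloth/mistral-7b-bnb-4bit base, not values confirmed by the card.

```python
# Minimal sketch, not the original training script.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MilaNguyen/sft_summary_1",  # this repo
    max_seq_length=2048,                    # assumed
    load_in_4bit=True,                      # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference mode

inputs = tokenizer("Summarize: ...", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```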
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
MilaNguyen/sft_summary_1
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:42:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H3K36me3-seqsight_16384_512_56M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5081
- F1 Score: 0.7989
- Accuracy: 0.8007

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5189        | 0.92  | 200   | 0.5160          | 0.7631   | 0.7663   |
| 0.4705        | 1.83  | 400   | 0.4825          | 0.7808   | 0.7830   |
| 0.4597        | 2.75  | 600   | 0.4726          | 0.7854   | 0.7870   |
| 0.4576        | 3.67  | 800   | 0.4671          | 0.7873   | 0.7887   |
| 0.4392        | 4.59  | 1000  | 0.4655          | 0.7908   | 0.7927   |
| 0.4323        | 5.5   | 1200  | 0.4652          | 0.7897   | 0.7919   |
| 0.4306        | 6.42  | 1400  | 0.4715          | 0.7922   | 0.7936   |
| 0.4219        | 7.34  | 1600  | 0.4993          | 0.7756   | 0.7804   |
| 0.4112        | 8.26  | 1800  | 0.4653          | 0.7934   | 0.7950   |
| 0.414         | 9.17  | 2000  | 0.4644          | 0.7888   | 0.7913   |
| 0.4047        | 10.09 | 2200  | 0.4850          | 0.7863   | 0.7899   |
| 0.3971        | 11.01 | 2400  | 0.4722          | 0.7904   | 0.7919   |
| 0.3902        | 11.93 | 2600  | 0.4661          | 0.7965   | 0.7970   |
| 0.3828        | 12.84 | 2800  | 0.4784          | 0.7893   | 0.7919   |
| 0.3766        | 13.76 | 3000  | 0.5001          | 0.7854   | 0.7887   |
| 0.3686        | 14.68 | 3200  | 0.5093          | 0.7906   | 0.7933   |
| 0.3576        | 15.6  | 3400  | 0.5030          | 0.7949   | 0.7970   |
| 0.3589        | 16.51 | 3600  | 0.5288          | 0.7869   | 0.7907   |
| 0.3511        | 17.43 | 3800  | 0.5205          | 0.7884   | 0.7916   |
| 0.3449        | 18.35 | 4000  | 0.4984          | 0.7894   | 0.7904   |
| 0.335         | 19.27 | 4200  | 0.5494          | 0.7889   | 0.7921   |
| 0.3309        | 20.18 | 4400  | 0.5330          | 0.8007   | 0.8019   |
| 0.324         | 21.1  | 4600  | 0.5325          | 0.7927   | 0.7933   |
| 0.3162        | 22.02 | 4800  | 0.5123          | 0.7969   | 0.7976   |
| 0.3118        | 22.94 | 5000  | 0.5269          | 0.7857   | 0.7876   |
| 0.3057        | 23.85 | 5200  | 0.5393          | 0.7936   | 0.7956   |
| 0.2982        | 24.77 | 5400  | 0.5480          | 0.7946   | 0.7959   |
| 0.2969        | 25.69 | 5600  | 0.5749          | 0.7926   | 0.7939   |
| 0.2901        | 26.61 | 5800  | 0.5522          | 0.7880   | 0.7896   |
| 0.288         | 27.52 | 6000  | 0.6007          | 0.7845   | 0.7873   |
| 0.284         | 28.44 | 6200  | 0.5484          | 0.7868   | 0.7884   |
| 0.277         | 29.36 | 6400  | 0.5689          | 0.7852   | 0.7870   |
| 0.2698        | 30.28 | 6600  | 0.6168          | 0.7842   | 0.7873   |
| 0.2756        | 31.19 | 6800  | 0.5753          | 0.7870   | 0.7878   |
| 0.2662        | 32.11 | 7000  | 0.6208          | 0.7857   | 0.7876   |
| 0.2629        | 33.03 | 7200  | 0.5987          | 0.7879   | 0.7896   |
| 0.2587        | 33.94 | 7400  | 0.6090          | 0.7861   | 0.7878   |
| 0.2521        | 34.86 | 7600  | 0.6288          | 0.7790   | 0.7810   |
| 0.2526        | 35.78 | 7800  | 0.6044          | 0.7897   | 0.7907   |
| 0.2498        | 36.7  | 8000  | 0.6139          | 0.7806   | 0.7824   |
| 0.2459        | 37.61 | 8200  | 0.6365          | 0.7844   | 0.7864   |
| 0.2421        | 38.53 | 8400  | 0.6772          | 0.7825   | 0.7853   |
| 0.2462        | 39.45 | 8600  | 0.6503          | 0.7889   | 0.7907   |
| 0.2373        | 40.37 | 8800  | 0.6569          | 0.7867   | 0.7887   |
| 0.239         | 41.28 | 9000  | 0.6492          | 0.7790   | 0.7807   |
| 0.2371        | 42.2  | 9200  | 0.6445          | 0.7821   | 0.7838   |
| 0.2328        | 43.12 | 9400  | 0.6469          | 0.7839   | 0.7856   |
| 0.2345        | 44.04 | 9600  | 0.6582          | 0.7807   | 0.7827   |
| 0.2314        | 44.95 | 9800  | 0.6627          | 0.7807   | 0.7830   |
| 0.2302        | 45.87 | 10000 | 0.6613          | 0.7827   | 0.7847   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
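For readers who want to reproduce a comparable setup, the hyperparameters above map roughly onto 🤗 `TrainingArguments` as in the sketch below. This is an illustration of the listed values only, not the authors' actual script: the output path and evaluation cadence are assumptions, and the PEFT/LoRA configuration is not shown because the card does not specify it.

```python
# Hedged sketch of the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="seqsight-h3k36me3",   # placeholder
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                 # training_steps: 10000
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",      # assumed; the card reports eval every 200 steps
    eval_steps=200,
)
```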
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:43:11+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
armaniii/llama-3-8b-argument-detection
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:43:27+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
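The card above is an unfilled template, so nothing about usage is confirmed. Based only on this record's tags (`phi`, `text-generation`, `conversational`, `custom_code`), a hedged quick-start might look like the following; treat every detail here as an assumption inferred from the tags.

```python
# Hedged sketch inferred from the repo tags only (phi architecture with
# custom_code); nothing here is confirmed by the model card itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kyounghyun/eeve-levware-k-240430"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

inputs = tokenizer("안녕하세요", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```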
{"library_name": "transformers", "tags": []}
kyounghyun/eeve-levware-k-240430
null
[ "transformers", "safetensors", "phi", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:43:54+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_0-seqsight_16384_512_56M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589
- F1 Score: 0.7250
- Accuracy: 0.7259

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6253        | 3.92   | 200   | 0.5778          | 0.6926   | 0.6926   |
| 0.5902        | 7.84   | 400   | 0.5768          | 0.6921   | 0.6938   |
| 0.5723        | 11.76  | 600   | 0.5556          | 0.7210   | 0.7210   |
| 0.5572        | 15.69  | 800   | 0.5455          | 0.7308   | 0.7321   |
| 0.5417        | 19.61  | 1000  | 0.5452          | 0.7346   | 0.7346   |
| 0.5289        | 23.53  | 1200  | 0.5381          | 0.7325   | 0.7346   |
| 0.5132        | 27.45  | 1400  | 0.5405          | 0.7281   | 0.7284   |
| 0.5028        | 31.37  | 1600  | 0.5324          | 0.7294   | 0.7296   |
| 0.4924        | 35.29  | 1800  | 0.5291          | 0.7321   | 0.7321   |
| 0.4815        | 39.22  | 2000  | 0.5237          | 0.7284   | 0.7284   |
| 0.472         | 43.14  | 2200  | 0.5350          | 0.7317   | 0.7321   |
| 0.4673        | 47.06  | 2400  | 0.5240          | 0.7309   | 0.7309   |
| 0.4596        | 50.98  | 2600  | 0.5353          | 0.7293   | 0.7296   |
| 0.4557        | 54.9   | 2800  | 0.5245          | 0.7343   | 0.7346   |
| 0.4465        | 58.82  | 3000  | 0.5219          | 0.7331   | 0.7333   |
| 0.4476        | 62.75  | 3200  | 0.5298          | 0.7334   | 0.7333   |
| 0.4376        | 66.67  | 3400  | 0.5273          | 0.7370   | 0.7370   |
| 0.4305        | 70.59  | 3600  | 0.5242          | 0.7358   | 0.7358   |
| 0.4273        | 74.51  | 3800  | 0.5299          | 0.7383   | 0.7383   |
| 0.4202        | 78.43  | 4000  | 0.5254          | 0.7418   | 0.7420   |
| 0.421         | 82.35  | 4200  | 0.5231          | 0.7522   | 0.7531   |
| 0.4095        | 86.27  | 4400  | 0.5391          | 0.7395   | 0.7395   |
| 0.4062        | 90.2   | 4600  | 0.5302          | 0.7428   | 0.7432   |
| 0.4021        | 94.12  | 4800  | 0.5313          | 0.7445   | 0.7444   |
| 0.3992        | 98.04  | 5000  | 0.5226          | 0.7565   | 0.7568   |
| 0.3951        | 101.96 | 5200  | 0.5339          | 0.7494   | 0.7494   |
| 0.3893        | 105.88 | 5400  | 0.5386          | 0.7444   | 0.7444   |
| 0.3842        | 109.8  | 5600  | 0.5358          | 0.7519   | 0.7519   |
| 0.3848        | 113.73 | 5800  | 0.5319          | 0.7519   | 0.7519   |
| 0.3784        | 117.65 | 6000  | 0.5389          | 0.7482   | 0.7481   |
| 0.373         | 121.57 | 6200  | 0.5481          | 0.7481   | 0.7481   |
| 0.3738        | 125.49 | 6400  | 0.5382          | 0.7506   | 0.7506   |
| 0.3641        | 129.41 | 6600  | 0.5452          | 0.7494   | 0.7494   |
| 0.3638        | 133.33 | 6800  | 0.5474          | 0.7556   | 0.7556   |
| 0.3581        | 137.25 | 7000  | 0.5569          | 0.7505   | 0.7506   |
| 0.3558        | 141.18 | 7200  | 0.5497          | 0.7494   | 0.7494   |
| 0.3538        | 145.1  | 7400  | 0.5555          | 0.7482   | 0.7481   |
| 0.3533        | 149.02 | 7600  | 0.5548          | 0.7506   | 0.7506   |
| 0.3481        | 152.94 | 7800  | 0.5495          | 0.7519   | 0.7519   |
| 0.3476        | 156.86 | 8000  | 0.5569          | 0.7482   | 0.7481   |
| 0.3453        | 160.78 | 8200  | 0.5602          | 0.7444   | 0.7444   |
| 0.3439        | 164.71 | 8400  | 0.5622          | 0.7481   | 0.7481   |
| 0.3433        | 168.63 | 8600  | 0.5544          | 0.7482   | 0.7481   |
| 0.3376        | 172.55 | 8800  | 0.5592          | 0.7531   | 0.7531   |
| 0.3405        | 176.47 | 9000  | 0.5619          | 0.7519   | 0.7519   |
| 0.3299        | 180.39 | 9200  | 0.5606          | 0.7544   | 0.7543   |
| 0.3387        | 184.31 | 9400  | 0.5643          | 0.7518   | 0.7519   |
| 0.3341        | 188.24 | 9600  | 0.5666          | 0.7505   | 0.7506   |
| 0.3358        | 192.16 | 9800  | 0.5641          | 0.7518   | 0.7519   |
| 0.3335        | 196.08 | 10000 | 0.5653          | 0.7494   | 0.7494   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
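Since these GUE checkpoints are published as PEFT adapters on top of the seqsight base model, loading for inference might look like the sketch below. The model class, label count, and need for `trust_remote_code` are assumptions (the card does not state them), so treat this purely as a starting point.

```python
# Hedged sketch: loading the adapter on top of its base model with peft.
# Whether the checkpoint expects a sequence-classification head with two
# labels (suggested by the binary F1/accuracy metrics) is an assumption,
# not something the card confirms.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_56M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```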
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_0-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:44:53+00:00
sentence-similarity
peft
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.

- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961

## Installation

```bash
pip install llm2vec
```

## Usage

```python
from llm2vec import LLM2Vec

import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel

# Loading the base Llama-3 model, along with custom code that enables
# bidirectional connections in decoder-only LLMs. MNTP LoRA weights are
# merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp"
)
config = AutoConfig.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
    trust_remote_code=True,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
    model,
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
)
model = model.merge_and_unload()  # This can take several minutes on cpu

# Loading unsupervised SimCSE model. This loads the trained LoRA weights on
# top of the MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + SimCSE (LoRA).
model = PeftModel.from_pretrained(
    model, "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse"
)

# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)

# Encoding queries using instructions
instruction = (
    "Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
    [instruction, "how much protein should a female eat"],
    [instruction, "summit define"],
]
q_reps = l2v.encode(queries)

# Encoding documents. Instructions are not required for documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)

# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))

print(cos_sim)
"""
tensor([[0.6522, 0.1891],
        [0.1162, 0.3457]])
"""
```

## Questions

If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`).
{"language": ["en"], "license": "mit", "library_name": "peft", "tags": ["text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "LLM2Vec-Meta-Llama-3-unsupervised", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 75.70149253731343}, {"type": "ap", "value": 40.824269118508354}, {"type": "f1", "value": 70.55918234479084}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 80.6812}, {"type": "ap", "value": 76.63327889516552}, {"type": "f1", "value": 80.5276613226382}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 40.002}, {"type": "f1", "value": 39.67277678335084}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 26.173999999999996}, {"type": "map_at_10", "value": 42.548}, {"type": "map_at_100", "value": 43.492999999999995}, {"type": "map_at_1000", "value": 43.5}, {"type": "map_at_3", "value": 37.376}, {"type": "map_at_5", "value": 40.359}, {"type": "mrr_at_1", "value": 27.24}, {"type": "mrr_at_10", "value": 42.945}, {"type": "mrr_at_100", "value": 43.89}, {"type": "mrr_at_1000", "value": 43.897000000000006}, {"type": "mrr_at_3", "value": 37.779}, {"type": "mrr_at_5", "value": 40.755}, {"type": "ndcg_at_1", "value": 26.173999999999996}, {"type": "ndcg_at_10", "value": 51.731}, {"type": "ndcg_at_100", "value": 55.684999999999995}, {"type": "ndcg_at_1000", "value": 55.86}, {"type": "ndcg_at_3", "value": 41.122}, {"type": "ndcg_at_5", "value": 46.491}, {"type": "precision_at_1", "value": 26.173999999999996}, {"type": "precision_at_10", "value": 8.108}, {"type": "precision_at_100", "value": 0.9820000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 17.330000000000002}, {"type": "precision_at_5", "value": 13.001}, {"type": "recall_at_1", "value": 26.173999999999996}, {"type": "recall_at_10", "value": 81.081}, {"type": "recall_at_100", "value": 98.222}, {"type": "recall_at_1000", "value": 99.57300000000001}, {"type": "recall_at_3", "value": 51.991}, {"type": "recall_at_5", "value": 65.007}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 49.215974795578546}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": 
"f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 41.71067780141813}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 57.15639347603191}, {"type": "mrr", "value": 71.4509959108297}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_spearman", "value": 84.67361609277127}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 84.76623376623375}, {"type": "f1", "value": 84.70041172334481}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 38.39251163108548}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 31.30501371807517}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "cqadupstack/android", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 26.409}, {"type": "map_at_10", "value": 36.925000000000004}, {"type": "map_at_100", "value": 38.651}, {"type": "map_at_1000", "value": 38.798}, {"type": "map_at_3", "value": 33.437}, {"type": "map_at_5", "value": 35.506}, {"type": "mrr_at_1", "value": 33.763}, {"type": "mrr_at_10", "value": 43.442}, {"type": "mrr_at_100", "value": 44.339}, {"type": "mrr_at_1000", "value": 44.391000000000005}, {"type": "mrr_at_3", "value": 40.749}, {"type": "mrr_at_5", "value": 42.408}, {"type": "ndcg_at_1", "value": 33.763}, {"type": "ndcg_at_10", "value": 43.486999999999995}, {"type": "ndcg_at_100", "value": 49.71}, {"type": "ndcg_at_1000", "value": 51.81}, {"type": "ndcg_at_3", "value": 38.586}, {"type": "ndcg_at_5", "value": 41.074}, {"type": "precision_at_1", "value": 33.763}, {"type": "precision_at_10", "value": 8.798}, {"type": "precision_at_100", "value": 1.544}, {"type": "precision_at_1000", "value": 0.21}, {"type": "precision_at_3", "value": 19.361}, {"type": "precision_at_5", "value": 14.335}, {"type": "recall_at_1", "value": 26.409}, {"type": "recall_at_10", "value": 55.352999999999994}, {"type": "recall_at_100", "value": 81.66799999999999}, {"type": "recall_at_1000", "value": 95.376}, {"type": "recall_at_3", "value": 40.304}, {"type": "recall_at_5", "value": 47.782000000000004}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackEnglishRetrieval", "type": "cqadupstack/english", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 26.6}, {"type": "map_at_10", "value": 36.42}, {"type": "map_at_100", "value": 37.628}, {"type": "map_at_1000", "value": 37.767}, {"type": "map_at_3", "value": 33.553}, {"type": "map_at_5", "value": 35.118}, {"type": "mrr_at_1", "value": 
34.394999999999996}, {"type": "mrr_at_10", "value": 42.586}, {"type": "mrr_at_100", "value": 43.251}, {"type": "mrr_at_1000", "value": 43.303000000000004}, {"type": "mrr_at_3", "value": 40.297}, {"type": "mrr_at_5", "value": 41.638}, {"type": "ndcg_at_1", "value": 34.394999999999996}, {"type": "ndcg_at_10", "value": 42.05}, {"type": "ndcg_at_100", "value": 46.371}, {"type": "ndcg_at_1000", "value": 48.76}, {"type": "ndcg_at_3", "value": 37.936}, {"type": "ndcg_at_5", "value": 39.827}, {"type": "precision_at_1", "value": 34.394999999999996}, {"type": "precision_at_10", "value": 8.268}, {"type": "precision_at_100", "value": 1.355}, {"type": "precision_at_1000", "value": 0.186}, {"type": "precision_at_3", "value": 18.726000000000003}, {"type": "precision_at_5", "value": 13.541}, {"type": "recall_at_1", "value": 26.6}, {"type": "recall_at_10", "value": 51.529}, {"type": "recall_at_100", "value": 70.038}, {"type": "recall_at_1000", "value": 85.67}, {"type": "recall_at_3", "value": 39.448}, {"type": "recall_at_5", "value": 44.6}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGamingRetrieval", "type": "cqadupstack/gaming", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.863000000000003}, {"type": "map_at_10", "value": 43.733}, {"type": "map_at_100", "value": 45.005}, {"type": "map_at_1000", "value": 45.074}, {"type": "map_at_3", "value": 40.593}, {"type": "map_at_5", "value": 42.272}, {"type": "mrr_at_1", "value": 37.555}, {"type": "mrr_at_10", "value": 47.532999999999994}, {"type": "mrr_at_100", "value": 48.431999999999995}, {"type": "mrr_at_1000", "value": 48.47}, {"type": "mrr_at_3", "value": 44.901}, {"type": "mrr_at_5", "value": 46.274}, {"type": "ndcg_at_1", "value": 37.555}, {"type": "ndcg_at_10", "value": 49.789}, {"type": "ndcg_at_100", "value": 55.059999999999995}, {"type": "ndcg_at_1000", "value": 56.434}, {"type": "ndcg_at_3", "value": 44.238}, {"type": "ndcg_at_5", "value": 46.698}, {"type": "precision_at_1", "value": 37.555}, {"type": "precision_at_10", "value": 8.257}, {"type": "precision_at_100", "value": 1.189}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 20.23}, {"type": "precision_at_5", "value": 13.868}, {"type": "recall_at_1", "value": 31.863000000000003}, {"type": "recall_at_10", "value": 64.188}, {"type": "recall_at_100", "value": 87.02600000000001}, {"type": "recall_at_1000", "value": 96.761}, {"type": "recall_at_3", "value": 48.986000000000004}, {"type": "recall_at_5", "value": 55.177}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGisRetrieval", "type": "cqadupstack/gis", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 15.964}, {"type": "map_at_10", "value": 22.746}, {"type": "map_at_100", "value": 23.704}, {"type": "map_at_1000", "value": 23.82}, {"type": "map_at_3", "value": 20.5}, {"type": "map_at_5", "value": 21.836}, {"type": "mrr_at_1", "value": 17.740000000000002}, {"type": "mrr_at_10", "value": 24.634}, {"type": "mrr_at_100", "value": 25.535999999999998}, {"type": "mrr_at_1000", "value": 25.628}, {"type": "mrr_at_3", "value": 22.429}, {"type": "mrr_at_5", "value": 23.791}, {"type": "ndcg_at_1", "value": 17.740000000000002}, {"type": "ndcg_at_10", "value": 26.838}, {"type": "ndcg_at_100", "value": 31.985000000000003}, {"type": "ndcg_at_1000", "value": 35.289}, {"type": "ndcg_at_3", "value": 22.384}, {"type": "ndcg_at_5", "value": 24.726}, {"type": "precision_at_1", 
"value": 17.740000000000002}, {"type": "precision_at_10", "value": 4.35}, {"type": "precision_at_100", "value": 0.753}, {"type": "precision_at_1000", "value": 0.108}, {"type": "precision_at_3", "value": 9.754999999999999}, {"type": "precision_at_5", "value": 7.164}, {"type": "recall_at_1", "value": 15.964}, {"type": "recall_at_10", "value": 37.705}, {"type": "recall_at_100", "value": 61.94499999999999}, {"type": "recall_at_1000", "value": 87.646}, {"type": "recall_at_3", "value": 25.714}, {"type": "recall_at_5", "value": 31.402}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackMathematicaRetrieval", "type": "cqadupstack/mathematica", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 9.221}, {"type": "map_at_10", "value": 14.735000000000001}, {"type": "map_at_100", "value": 15.778}, {"type": "map_at_1000", "value": 15.9}, {"type": "map_at_3", "value": 12.791}, {"type": "map_at_5", "value": 13.703999999999999}, {"type": "mrr_at_1", "value": 12.438}, {"type": "mrr_at_10", "value": 18.353}, {"type": "mrr_at_100", "value": 19.285}, {"type": "mrr_at_1000", "value": 19.375}, {"type": "mrr_at_3", "value": 16.439}, {"type": "mrr_at_5", "value": 17.352999999999998}, {"type": "ndcg_at_1", "value": 12.438}, {"type": "ndcg_at_10", "value": 18.703}, {"type": "ndcg_at_100", "value": 24.104999999999997}, {"type": "ndcg_at_1000", "value": 27.366}, {"type": "ndcg_at_3", "value": 15.055}, {"type": "ndcg_at_5", "value": 16.42}, {"type": "precision_at_1", "value": 12.438}, {"type": "precision_at_10", "value": 3.818}, {"type": "precision_at_100", "value": 0.77}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 7.753}, {"type": "precision_at_5", "value": 5.622}, {"type": "recall_at_1", "value": 9.221}, {"type": "recall_at_10", "value": 27.461999999999996}, {"type": "recall_at_100", "value": 51.909000000000006}, {"type": "recall_at_1000", "value": 75.56}, {"type": "recall_at_3", "value": 17.046}, {"type": "recall_at_5", "value": 20.766000000000002}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackPhysicsRetrieval", "type": "cqadupstack/physics", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 22.828}, {"type": "map_at_10", "value": 33.166000000000004}, {"type": "map_at_100", "value": 34.618}, {"type": "map_at_1000", "value": 34.744}, {"type": "map_at_3", "value": 29.737000000000002}, {"type": "map_at_5", "value": 31.541000000000004}, {"type": "mrr_at_1", "value": 29.548000000000002}, {"type": "mrr_at_10", "value": 38.582}, {"type": "mrr_at_100", "value": 39.527}, {"type": "mrr_at_1000", "value": 39.577}, {"type": "mrr_at_3", "value": 35.884}, {"type": "mrr_at_5", "value": 37.413999999999994}, {"type": "ndcg_at_1", "value": 29.548000000000002}, {"type": "ndcg_at_10", "value": 39.397}, {"type": "ndcg_at_100", "value": 45.584}, {"type": "ndcg_at_1000", "value": 47.823}, {"type": "ndcg_at_3", "value": 33.717000000000006}, {"type": "ndcg_at_5", "value": 36.223}, {"type": "precision_at_1", "value": 29.548000000000002}, {"type": "precision_at_10", "value": 7.767}, {"type": "precision_at_100", "value": 1.2959999999999998}, {"type": "precision_at_1000", "value": 0.17099999999999999}, {"type": "precision_at_3", "value": 16.747}, {"type": "precision_at_5", "value": 12.203999999999999}, {"type": "recall_at_1", "value": 22.828}, {"type": "recall_at_10", "value": 52.583999999999996}, {"type": "recall_at_100", "value": 
79.06400000000001}, {"type": "recall_at_1000", "value": 93.59100000000001}, {"type": "recall_at_3", "value": 36.671}, {"type": "recall_at_5", "value": 43.22}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackProgrammersRetrieval", "type": "cqadupstack/programmers", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.366}, {"type": "map_at_10", "value": 30.214000000000002}, {"type": "map_at_100", "value": 31.647}, {"type": "map_at_1000", "value": 31.763}, {"type": "map_at_3", "value": 27.234}, {"type": "map_at_5", "value": 28.801}, {"type": "mrr_at_1", "value": 26.256}, {"type": "mrr_at_10", "value": 35.299}, {"type": "mrr_at_100", "value": 36.284}, {"type": "mrr_at_1000", "value": 36.342}, {"type": "mrr_at_3", "value": 32.572}, {"type": "mrr_at_5", "value": 34.050999999999995}, {"type": "ndcg_at_1", "value": 26.256}, {"type": "ndcg_at_10", "value": 35.899}, {"type": "ndcg_at_100", "value": 41.983}, {"type": "ndcg_at_1000", "value": 44.481}, {"type": "ndcg_at_3", "value": 30.665}, {"type": "ndcg_at_5", "value": 32.879999999999995}, {"type": "precision_at_1", "value": 26.256}, {"type": "precision_at_10", "value": 6.804}, {"type": "precision_at_100", "value": 1.187}, {"type": "precision_at_1000", "value": 0.16}, {"type": "precision_at_3", "value": 14.84}, {"type": "precision_at_5", "value": 10.708}, {"type": "recall_at_1", "value": 21.366}, {"type": "recall_at_10", "value": 47.878}, {"type": "recall_at_100", "value": 73.245}, {"type": "recall_at_1000", "value": 90.623}, {"type": "recall_at_3", "value": 33.341}, {"type": "recall_at_5", "value": 39.198}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval", "type": "mteb/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 19.477166666666665}, {"type": "map_at_10", "value": 27.431416666666664}, {"type": "map_at_100", "value": 28.656000000000002}, {"type": "map_at_1000", "value": 28.787583333333338}, {"type": "map_at_3", "value": 24.85175}, {"type": "map_at_5", "value": 26.270166666666668}, {"type": "mrr_at_1", "value": 24.06841666666667}, {"type": "mrr_at_10", "value": 31.620000000000005}, {"type": "mrr_at_100", "value": 32.52283333333333}, {"type": "mrr_at_1000", "value": 32.59441666666667}, {"type": "mrr_at_3", "value": 29.328666666666663}, {"type": "mrr_at_5", "value": 30.620416666666667}, {"type": "ndcg_at_1", "value": 24.06841666666667}, {"type": "ndcg_at_10", "value": 32.404583333333335}, {"type": "ndcg_at_100", "value": 37.779500000000006}, {"type": "ndcg_at_1000", "value": 40.511583333333334}, {"type": "ndcg_at_3", "value": 27.994166666666665}, {"type": "ndcg_at_5", "value": 30.021749999999997}, {"type": "precision_at_1", "value": 24.06841666666667}, {"type": "precision_at_10", "value": 6.03725}, {"type": "precision_at_100", "value": 1.0500833333333337}, {"type": "precision_at_1000", "value": 0.14875000000000002}, {"type": "precision_at_3", "value": 13.419583333333335}, {"type": "precision_at_5", "value": 9.700666666666665}, {"type": "recall_at_1", "value": 19.477166666666665}, {"type": "recall_at_10", "value": 42.99441666666667}, {"type": "recall_at_100", "value": 66.787}, {"type": "recall_at_1000", "value": 86.18825000000001}, {"type": "recall_at_3", "value": 30.46366666666667}, {"type": "recall_at_5", "value": 35.83141666666667}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackStatsRetrieval", "type": "cqadupstack/stats", "config": "default", 
"split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 16.246}, {"type": "map_at_10", "value": 22.127}, {"type": "map_at_100", "value": 23.006}, {"type": "map_at_1000", "value": 23.125}, {"type": "map_at_3", "value": 20.308999999999997}, {"type": "map_at_5", "value": 21.139}, {"type": "mrr_at_1", "value": 19.631999999999998}, {"type": "mrr_at_10", "value": 24.884999999999998}, {"type": "mrr_at_100", "value": 25.704}, {"type": "mrr_at_1000", "value": 25.793}, {"type": "mrr_at_3", "value": 23.083000000000002}, {"type": "mrr_at_5", "value": 23.942}, {"type": "ndcg_at_1", "value": 19.631999999999998}, {"type": "ndcg_at_10", "value": 25.862000000000002}, {"type": "ndcg_at_100", "value": 30.436000000000003}, {"type": "ndcg_at_1000", "value": 33.638}, {"type": "ndcg_at_3", "value": 22.431}, {"type": "ndcg_at_5", "value": 23.677}, {"type": "precision_at_1", "value": 19.631999999999998}, {"type": "precision_at_10", "value": 4.417}, {"type": "precision_at_100", "value": 0.7270000000000001}, {"type": "precision_at_1000", "value": 0.109}, {"type": "precision_at_3", "value": 10.327}, {"type": "precision_at_5", "value": 7.147}, {"type": "recall_at_1", "value": 16.246}, {"type": "recall_at_10", "value": 34.869}, {"type": "recall_at_100", "value": 56.221}, {"type": "recall_at_1000", "value": 80.449}, {"type": "recall_at_3", "value": 24.83}, {"type": "recall_at_5", "value": 28.142}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackTexRetrieval", "type": "cqadupstack/tex", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 9.798}, {"type": "map_at_10", "value": 14.695}, {"type": "map_at_100", "value": 15.590000000000002}, {"type": "map_at_1000", "value": 15.726999999999999}, {"type": "map_at_3", "value": 13.004999999999999}, {"type": "map_at_5", "value": 13.861}, {"type": "mrr_at_1", "value": 12.939}, {"type": "mrr_at_10", "value": 18.218}, {"type": "mrr_at_100", "value": 18.998}, {"type": "mrr_at_1000", "value": 19.093}, {"type": "mrr_at_3", "value": 16.454}, {"type": "mrr_at_5", "value": 17.354}, {"type": "ndcg_at_1", "value": 12.939}, {"type": "ndcg_at_10", "value": 18.278}, {"type": "ndcg_at_100", "value": 22.709}, {"type": "ndcg_at_1000", "value": 26.064}, {"type": "ndcg_at_3", "value": 15.204}, {"type": "ndcg_at_5", "value": 16.416}, {"type": "precision_at_1", "value": 12.939}, {"type": "precision_at_10", "value": 3.768}, {"type": "precision_at_100", "value": 0.724}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 7.707999999999999}, {"type": "precision_at_5", "value": 5.733}, {"type": "recall_at_1", "value": 9.798}, {"type": "recall_at_10", "value": 25.562}, {"type": "recall_at_100", "value": 45.678999999999995}, {"type": "recall_at_1000", "value": 69.963}, {"type": "recall_at_3", "value": 16.705000000000002}, {"type": "recall_at_5", "value": 19.969}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackUnixRetrieval", "type": "cqadupstack/unix", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 19.1}, {"type": "map_at_10", "value": 27.034999999999997}, {"type": "map_at_100", "value": 28.396}, {"type": "map_at_1000", "value": 28.518}, {"type": "map_at_3", "value": 24.363}, {"type": "map_at_5", "value": 25.826999999999998}, {"type": "mrr_at_1", "value": 23.694000000000003}, {"type": "mrr_at_10", "value": 31.724999999999998}, {"type": "mrr_at_100", "value": 32.743}, {"type": "mrr_at_1000", 
"value": 32.82}, {"type": "mrr_at_3", "value": 29.275000000000002}, {"type": "mrr_at_5", "value": 30.684}, {"type": "ndcg_at_1", "value": 23.694000000000003}, {"type": "ndcg_at_10", "value": 32.366}, {"type": "ndcg_at_100", "value": 38.241}, {"type": "ndcg_at_1000", "value": 40.973}, {"type": "ndcg_at_3", "value": 27.661}, {"type": "ndcg_at_5", "value": 29.782999999999998}, {"type": "precision_at_1", "value": 23.694000000000003}, {"type": "precision_at_10", "value": 5.951}, {"type": "precision_at_100", "value": 1.0070000000000001}, {"type": "precision_at_1000", "value": 0.135}, {"type": "precision_at_3", "value": 13.34}, {"type": "precision_at_5", "value": 9.533999999999999}, {"type": "recall_at_1", "value": 19.1}, {"type": "recall_at_10", "value": 44.032}, {"type": "recall_at_100", "value": 69.186}, {"type": "recall_at_1000", "value": 88.562}, {"type": "recall_at_3", "value": 30.712}, {"type": "recall_at_5", "value": 36.372}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWebmastersRetrieval", "type": "cqadupstack/webmasters", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 20.671}, {"type": "map_at_10", "value": 28.583}, {"type": "map_at_100", "value": 30.098999999999997}, {"type": "map_at_1000", "value": 30.364}, {"type": "map_at_3", "value": 25.825}, {"type": "map_at_5", "value": 27.500999999999998}, {"type": "mrr_at_1", "value": 25.889}, {"type": "mrr_at_10", "value": 33.617999999999995}, {"type": "mrr_at_100", "value": 34.687}, {"type": "mrr_at_1000", "value": 34.774}, {"type": "mrr_at_3", "value": 31.191999999999997}, {"type": "mrr_at_5", "value": 32.675}, {"type": "ndcg_at_1", "value": 25.889}, {"type": "ndcg_at_10", "value": 34.056999999999995}, {"type": "ndcg_at_100", "value": 40.142}, {"type": "ndcg_at_1000", "value": 43.614000000000004}, {"type": "ndcg_at_3", "value": 29.688}, {"type": "ndcg_at_5", "value": 32.057}, {"type": "precision_at_1", "value": 25.889}, {"type": "precision_at_10", "value": 6.7}, {"type": "precision_at_100", "value": 1.417}, {"type": "precision_at_1000", "value": 0.241}, {"type": "precision_at_3", "value": 14.360999999999999}, {"type": "precision_at_5", "value": 10.711}, {"type": "recall_at_1", "value": 20.671}, {"type": "recall_at_10", "value": 43.97}, {"type": "recall_at_100", "value": 71.83699999999999}, {"type": "recall_at_1000", "value": 94.42399999999999}, {"type": "recall_at_3", "value": 31.0}, {"type": "recall_at_5", "value": 37.489}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWordpressRetrieval", "type": "cqadupstack/wordpress", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 13.66}, {"type": "map_at_10", "value": 18.798000000000002}, {"type": "map_at_100", "value": 19.75}, {"type": "map_at_1000", "value": 19.851}, {"type": "map_at_3", "value": 16.874}, {"type": "map_at_5", "value": 18.136}, {"type": "mrr_at_1", "value": 14.972}, {"type": "mrr_at_10", "value": 20.565}, {"type": "mrr_at_100", "value": 21.488}, {"type": "mrr_at_1000", "value": 21.567}, {"type": "mrr_at_3", "value": 18.669}, {"type": "mrr_at_5", "value": 19.861}, {"type": "ndcg_at_1", "value": 14.972}, {"type": "ndcg_at_10", "value": 22.128999999999998}, {"type": "ndcg_at_100", "value": 27.028000000000002}, {"type": "ndcg_at_1000", "value": 29.887000000000004}, {"type": "ndcg_at_3", "value": 18.365000000000002}, {"type": "ndcg_at_5", "value": 20.48}, {"type": "precision_at_1", "value": 14.972}, {"type": "precision_at_10", 
"value": 3.549}, {"type": "precision_at_100", "value": 0.632}, {"type": "precision_at_1000", "value": 0.093}, {"type": "precision_at_3", "value": 7.887}, {"type": "precision_at_5", "value": 5.840999999999999}, {"type": "recall_at_1", "value": 13.66}, {"type": "recall_at_10", "value": 30.801000000000002}, {"type": "recall_at_100", "value": 53.626}, {"type": "recall_at_1000", "value": 75.634}, {"type": "recall_at_3", "value": 20.807000000000002}, {"type": "recall_at_5", "value": 25.86}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 8.622}, {"type": "map_at_10", "value": 16.042}, {"type": "map_at_100", "value": 18.023}, {"type": "map_at_1000", "value": 18.228}, {"type": "map_at_3", "value": 12.995999999999999}, {"type": "map_at_5", "value": 14.424000000000001}, {"type": "mrr_at_1", "value": 18.892999999999997}, {"type": "mrr_at_10", "value": 30.575000000000003}, {"type": "mrr_at_100", "value": 31.814999999999998}, {"type": "mrr_at_1000", "value": 31.856}, {"type": "mrr_at_3", "value": 26.851000000000003}, {"type": "mrr_at_5", "value": 29.021}, {"type": "ndcg_at_1", "value": 18.892999999999997}, {"type": "ndcg_at_10", "value": 23.575}, {"type": "ndcg_at_100", "value": 31.713}, {"type": "ndcg_at_1000", "value": 35.465}, {"type": "ndcg_at_3", "value": 18.167}, {"type": "ndcg_at_5", "value": 20.071}, {"type": "precision_at_1", "value": 18.892999999999997}, {"type": "precision_at_10", "value": 7.883}, {"type": "precision_at_100", "value": 1.652}, {"type": "precision_at_1000", "value": 0.23500000000000001}, {"type": "precision_at_3", "value": 13.898}, {"type": "precision_at_5", "value": 11.14}, {"type": "recall_at_1", "value": 8.622}, {"type": "recall_at_10", "value": 30.044999999999998}, {"type": "recall_at_100", "value": 58.072}, {"type": "recall_at_1000", "value": 79.226}, {"type": "recall_at_3", "value": 17.21}, {"type": "recall_at_5", "value": 22.249}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.845}, {"type": "map_at_10", "value": 12.352}, {"type": "map_at_100", "value": 17.423}, {"type": "map_at_1000", "value": 18.529}, {"type": "map_at_3", "value": 8.505}, {"type": "map_at_5", "value": 10.213}, {"type": "mrr_at_1", "value": 41.75}, {"type": "mrr_at_10", "value": 54.6}, {"type": "mrr_at_100", "value": 55.345}, {"type": "mrr_at_1000", "value": 55.374}, {"type": "mrr_at_3", "value": 52.37500000000001}, {"type": "mrr_at_5", "value": 53.87499999999999}, {"type": "ndcg_at_1", "value": 31.25}, {"type": "ndcg_at_10", "value": 26.779999999999998}, {"type": "ndcg_at_100", "value": 31.929000000000002}, {"type": "ndcg_at_1000", "value": 39.290000000000006}, {"type": "ndcg_at_3", "value": 28.746}, {"type": "ndcg_at_5", "value": 27.334999999999997}, {"type": "precision_at_1", "value": 41.75}, {"type": "precision_at_10", "value": 22.55}, {"type": "precision_at_100", "value": 7.242}, {"type": "precision_at_1000", "value": 1.439}, {"type": "precision_at_3", "value": 33.833}, {"type": "precision_at_5", "value": 28.65}, {"type": "recall_at_1", "value": 4.845}, {"type": "recall_at_10", "value": 18.664}, {"type": "recall_at_100", "value": 41.085}, {"type": "recall_at_1000", "value": 65.242}, {"type": "recall_at_3", "value": 10.572}, {"type": "recall_at_5", "value": 13.961000000000002}]}, {"task": {"type": 
"Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 47.08}, {"type": "f1", "value": 42.843345856303756}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 33.743}, {"type": "map_at_10", "value": 46.521}, {"type": "map_at_100", "value": 47.235}, {"type": "map_at_1000", "value": 47.272}, {"type": "map_at_3", "value": 43.252}, {"type": "map_at_5", "value": 45.267}, {"type": "mrr_at_1", "value": 36.484}, {"type": "mrr_at_10", "value": 49.406}, {"type": "mrr_at_100", "value": 50.03300000000001}, {"type": "mrr_at_1000", "value": 50.058}, {"type": "mrr_at_3", "value": 46.195}, {"type": "mrr_at_5", "value": 48.193999999999996}, {"type": "ndcg_at_1", "value": 36.484}, {"type": "ndcg_at_10", "value": 53.42}, {"type": "ndcg_at_100", "value": 56.69499999999999}, {"type": "ndcg_at_1000", "value": 57.623999999999995}, {"type": "ndcg_at_3", "value": 47.010999999999996}, {"type": "ndcg_at_5", "value": 50.524}, {"type": "precision_at_1", "value": 36.484}, {"type": "precision_at_10", "value": 7.925}, {"type": "precision_at_100", "value": 0.975}, {"type": "precision_at_1000", "value": 0.107}, {"type": "precision_at_3", "value": 19.967}, {"type": "precision_at_5", "value": 13.87}, {"type": "recall_at_1", "value": 33.743}, {"type": "recall_at_10", "value": 71.988}, {"type": "recall_at_100", "value": 86.60799999999999}, {"type": "recall_at_1000", "value": 93.54}, {"type": "recall_at_3", "value": 54.855}, {"type": "recall_at_5", "value": 63.341}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 13.003}, {"type": "map_at_10", "value": 21.766}, {"type": "map_at_100", "value": 23.618}, {"type": "map_at_1000", "value": 23.832}, {"type": "map_at_3", "value": 18.282999999999998}, {"type": "map_at_5", "value": 20.267}, {"type": "mrr_at_1", "value": 26.851999999999997}, {"type": "mrr_at_10", "value": 34.658}, {"type": "mrr_at_100", "value": 35.729}, {"type": "mrr_at_1000", "value": 35.785}, {"type": "mrr_at_3", "value": 31.686999999999998}, {"type": "mrr_at_5", "value": 33.315}, {"type": "ndcg_at_1", "value": 26.851999999999997}, {"type": "ndcg_at_10", "value": 28.563}, {"type": "ndcg_at_100", "value": 36.374}, {"type": "ndcg_at_1000", "value": 40.306999999999995}, {"type": "ndcg_at_3", "value": 24.224}, {"type": "ndcg_at_5", "value": 25.939}, {"type": "precision_at_1", "value": 26.851999999999997}, {"type": "precision_at_10", "value": 8.193999999999999}, {"type": "precision_at_100", "value": 1.616}, {"type": "precision_at_1000", "value": 0.232}, {"type": "precision_at_3", "value": 16.255}, {"type": "precision_at_5", "value": 12.469}, {"type": "recall_at_1", "value": 13.003}, {"type": "recall_at_10", "value": 35.689}, {"type": "recall_at_100", "value": 65.762}, {"type": "recall_at_1000", "value": 89.546}, {"type": "recall_at_3", "value": 21.820999999999998}, {"type": "recall_at_5", "value": 28.097}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 29.541}, {"type": "map_at_10", "value": 43.088}, {"type": "map_at_100", "value": 44.252}, 
{"type": "map_at_1000", "value": 44.345}, {"type": "map_at_3", "value": 39.79}, {"type": "map_at_5", "value": 41.687000000000005}, {"type": "mrr_at_1", "value": 59.082}, {"type": "mrr_at_10", "value": 67.27300000000001}, {"type": "mrr_at_100", "value": 67.708}, {"type": "mrr_at_1000", "value": 67.731}, {"type": "mrr_at_3", "value": 65.526}, {"type": "mrr_at_5", "value": 66.589}, {"type": "ndcg_at_1", "value": 59.082}, {"type": "ndcg_at_10", "value": 52.372}, {"type": "ndcg_at_100", "value": 56.725}, {"type": "ndcg_at_1000", "value": 58.665}, {"type": "ndcg_at_3", "value": 47.129}, {"type": "ndcg_at_5", "value": 49.808}, {"type": "precision_at_1", "value": 59.082}, {"type": "precision_at_10", "value": 11.275}, {"type": "precision_at_100", "value": 1.469}, {"type": "precision_at_1000", "value": 0.173}, {"type": "precision_at_3", "value": 29.773}, {"type": "precision_at_5", "value": 19.980999999999998}, {"type": "recall_at_1", "value": 29.541}, {"type": "recall_at_10", "value": 56.374}, {"type": "recall_at_100", "value": 73.42999999999999}, {"type": "recall_at_1000", "value": 86.28}, {"type": "recall_at_3", "value": 44.659}, {"type": "recall_at_5", "value": 49.952999999999996}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 75.1904}, {"type": "ap", "value": 69.80555086826531}, {"type": "f1", "value": 74.93725389065787}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 7.085}, {"type": "map_at_10", "value": 13.344000000000001}, {"type": "map_at_100", "value": 14.501}, {"type": "map_at_1000", "value": 14.605}, {"type": "map_at_3", "value": 10.758}, {"type": "map_at_5", "value": 12.162}, {"type": "mrr_at_1", "value": 7.278}, {"type": "mrr_at_10", "value": 13.607}, {"type": "mrr_at_100", "value": 14.761}, {"type": "mrr_at_1000", "value": 14.860000000000001}, {"type": "mrr_at_3", "value": 11.003}, {"type": "mrr_at_5", "value": 12.421}, {"type": "ndcg_at_1", "value": 7.278}, {"type": "ndcg_at_10", "value": 17.473}, {"type": "ndcg_at_100", "value": 23.721}, {"type": "ndcg_at_1000", "value": 26.69}, {"type": "ndcg_at_3", "value": 12.078}, {"type": "ndcg_at_5", "value": 14.62}, {"type": "precision_at_1", "value": 7.278}, {"type": "precision_at_10", "value": 3.175}, {"type": "precision_at_100", "value": 0.639}, {"type": "precision_at_1000", "value": 0.09}, {"type": "precision_at_3", "value": 5.382}, {"type": "precision_at_5", "value": 4.519}, {"type": "recall_at_1", "value": 7.085}, {"type": "recall_at_10", "value": 30.549}, {"type": "recall_at_100", "value": 60.919999999999995}, {"type": "recall_at_1000", "value": 84.372}, {"type": "recall_at_3", "value": 15.675}, {"type": "recall_at_5", "value": 21.818}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 94.46876424988601}, {"type": "f1", "value": 94.23159241922738}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", 
"value": 81.0875512995896}, {"type": "f1", "value": 61.674961674414}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 75.01344989912575}, {"type": "f1", "value": 71.7942527839921}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 79.15601882985877}, {"type": "f1", "value": 78.82502954601195}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 31.468806971345227}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 27.874332804382256}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 30.099340785595842}, {"type": "mrr", "value": 31.077367694660257}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 3.9050000000000002}, {"type": "map_at_10", "value": 8.931000000000001}, {"type": "map_at_100", "value": 11.246}, {"type": "map_at_1000", "value": 12.579}, {"type": "map_at_3", "value": 6.544}, {"type": "map_at_5", "value": 7.854}, {"type": "mrr_at_1", "value": 33.745999999999995}, {"type": "mrr_at_10", "value": 44.734}, {"type": "mrr_at_100", "value": 45.486}, {"type": "mrr_at_1000", "value": 45.534}, {"type": "mrr_at_3", "value": 42.157}, {"type": "mrr_at_5", "value": 43.813}, {"type": "ndcg_at_1", "value": 31.734}, {"type": "ndcg_at_10", "value": 26.284999999999997}, {"type": "ndcg_at_100", "value": 25.211}, {"type": "ndcg_at_1000", "value": 34.974}, {"type": "ndcg_at_3", "value": 29.918}, {"type": "ndcg_at_5", "value": 29.066}, {"type": "precision_at_1", "value": 33.745999999999995}, {"type": "precision_at_10", "value": 19.628}, {"type": "precision_at_100", "value": 6.476999999999999}, {"type": "precision_at_1000", "value": 1.976}, {"type": "precision_at_3", "value": 28.793000000000003}, {"type": "precision_at_5", "value": 25.759}, {"type": "recall_at_1", "value": 3.9050000000000002}, {"type": "recall_at_10", "value": 13.375}, {"type": "recall_at_100", "value": 28.453}, {"type": "recall_at_1000", "value": 61.67399999999999}, {"type": "recall_at_3", "value": 7.774}, {"type": "recall_at_5", "value": 10.754}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 18.33}, {"type": "map_at_10", "value": 30.44}, {"type": "map_at_100", "value": 31.848}, {"type": "map_at_1000", "value": 31.906000000000002}, {"type": "map_at_3", "value": 26.143}, {"type": "map_at_5", "value": 28.583}, {"type": 
"mrr_at_1", "value": 21.031}, {"type": "mrr_at_10", "value": 33.028}, {"type": "mrr_at_100", "value": 34.166000000000004}, {"type": "mrr_at_1000", "value": 34.208}, {"type": "mrr_at_3", "value": 29.089}, {"type": "mrr_at_5", "value": 31.362000000000002}, {"type": "ndcg_at_1", "value": 21.031}, {"type": "ndcg_at_10", "value": 37.65}, {"type": "ndcg_at_100", "value": 43.945}, {"type": "ndcg_at_1000", "value": 45.338}, {"type": "ndcg_at_3", "value": 29.256999999999998}, {"type": "ndcg_at_5", "value": 33.453}, {"type": "precision_at_1", "value": 21.031}, {"type": "precision_at_10", "value": 6.8309999999999995}, {"type": "precision_at_100", "value": 1.035}, {"type": "precision_at_1000", "value": 0.117}, {"type": "precision_at_3", "value": 13.818}, {"type": "precision_at_5", "value": 10.649000000000001}, {"type": "recall_at_1", "value": 18.33}, {"type": "recall_at_10", "value": 57.330999999999996}, {"type": "recall_at_100", "value": 85.284}, {"type": "recall_at_1000", "value": 95.676}, {"type": "recall_at_3", "value": 35.356}, {"type": "recall_at_5", "value": 45.073}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 66.373}, {"type": "map_at_10", "value": 80.233}, {"type": "map_at_100", "value": 80.973}, {"type": "map_at_1000", "value": 80.99499999999999}, {"type": "map_at_3", "value": 77.127}, {"type": "map_at_5", "value": 79.056}, {"type": "mrr_at_1", "value": 76.55}, {"type": "mrr_at_10", "value": 83.813}, {"type": "mrr_at_100", "value": 83.96900000000001}, {"type": "mrr_at_1000", "value": 83.97200000000001}, {"type": "mrr_at_3", "value": 82.547}, {"type": "mrr_at_5", "value": 83.38600000000001}, {"type": "ndcg_at_1", "value": 76.53999999999999}, {"type": "ndcg_at_10", "value": 84.638}, {"type": "ndcg_at_100", "value": 86.28099999999999}, {"type": "ndcg_at_1000", "value": 86.459}, {"type": "ndcg_at_3", "value": 81.19}, {"type": "ndcg_at_5", "value": 83.057}, {"type": "precision_at_1", "value": 76.53999999999999}, {"type": "precision_at_10", "value": 12.928999999999998}, {"type": "precision_at_100", "value": 1.514}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 35.503}, {"type": "precision_at_5", "value": 23.512}, {"type": "recall_at_1", "value": 66.373}, {"type": "recall_at_10", "value": 93.273}, {"type": "recall_at_100", "value": 99.031}, {"type": "recall_at_1000", "value": 99.91799999999999}, {"type": "recall_at_3", "value": 83.55799999999999}, {"type": "recall_at_5", "value": 88.644}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 43.67174666339103}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 61.66838659211271}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.318}, {"type": "map_at_10", "value": 5.938000000000001}, {"type": "map_at_100", "value": 7.582}, {"type": "map_at_1000", "value": 7.936}, {"type": "map_at_3", "value": 4.208}, {"type": "map_at_5", 
"value": 5.098}, {"type": "mrr_at_1", "value": 11.4}, {"type": "mrr_at_10", "value": 17.655}, {"type": "mrr_at_100", "value": 19.088}, {"type": "mrr_at_1000", "value": 19.203}, {"type": "mrr_at_3", "value": 15.25}, {"type": "mrr_at_5", "value": 16.535}, {"type": "ndcg_at_1", "value": 11.4}, {"type": "ndcg_at_10", "value": 10.388}, {"type": "ndcg_at_100", "value": 18.165}, {"type": "ndcg_at_1000", "value": 24.842}, {"type": "ndcg_at_3", "value": 9.414}, {"type": "ndcg_at_5", "value": 8.453}, {"type": "precision_at_1", "value": 11.4}, {"type": "precision_at_10", "value": 5.54}, {"type": "precision_at_100", "value": 1.71}, {"type": "precision_at_1000", "value": 0.33}, {"type": "precision_at_3", "value": 8.866999999999999}, {"type": "precision_at_5", "value": 7.580000000000001}, {"type": "recall_at_1", "value": 2.318}, {"type": "recall_at_10", "value": 11.267000000000001}, {"type": "recall_at_100", "value": 34.743}, {"type": "recall_at_1000", "value": 67.07300000000001}, {"type": "recall_at_3", "value": 5.408}, {"type": "recall_at_5", "value": 7.713}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_spearman", "value": 72.15850185456762}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_spearman", "value": 61.59518395985063}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_spearman", "value": 79.71131323749228}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_spearman", "value": 72.10974664733891}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_spearman", "value": 82.17899407125657}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_spearman", "value": 79.41138579273438}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_spearman", "value": 85.44343473477939}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_spearman", "value": 63.90264271389905}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_spearman", "value": 77.44151296326804}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": 
"d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 76.27597486396654}, {"type": "mrr", "value": 93.28127119793788}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 49.594}, {"type": "map_at_10", "value": 60.951}, {"type": "map_at_100", "value": 61.68599999999999}, {"type": "map_at_1000", "value": 61.712}, {"type": "map_at_3", "value": 57.946}, {"type": "map_at_5", "value": 59.89}, {"type": "mrr_at_1", "value": 52.666999999999994}, {"type": "mrr_at_10", "value": 62.724000000000004}, {"type": "mrr_at_100", "value": 63.269}, {"type": "mrr_at_1000", "value": 63.291}, {"type": "mrr_at_3", "value": 60.167}, {"type": "mrr_at_5", "value": 61.95}, {"type": "ndcg_at_1", "value": 52.666999999999994}, {"type": "ndcg_at_10", "value": 66.35600000000001}, {"type": "ndcg_at_100", "value": 69.463}, {"type": "ndcg_at_1000", "value": 70.111}, {"type": "ndcg_at_3", "value": 60.901}, {"type": "ndcg_at_5", "value": 64.054}, {"type": "precision_at_1", "value": 52.666999999999994}, {"type": "precision_at_10", "value": 9.0}, {"type": "precision_at_100", "value": 1.073}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 24.221999999999998}, {"type": "precision_at_5", "value": 16.333000000000002}, {"type": "recall_at_1", "value": 49.594}, {"type": "recall_at_10", "value": 81.256}, {"type": "recall_at_100", "value": 94.989}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 66.706}, {"type": "recall_at_5", "value": 74.411}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.65049504950495}, {"type": "cos_sim_ap", "value": 88.1421623503371}, {"type": "cos_sim_f1", "value": 81.44072036018008}, {"type": "cos_sim_precision", "value": 81.48148148148148}, {"type": "cos_sim_recall", "value": 81.39999999999999}, {"type": "dot_accuracy", "value": 99.37623762376238}, {"type": "dot_ap", "value": 69.87152032240303}, {"type": "dot_f1", "value": 65.64885496183206}, {"type": "dot_precision", "value": 72.18225419664267}, {"type": "dot_recall", "value": 60.199999999999996}, {"type": "euclidean_accuracy", "value": 99.63069306930693}, {"type": "euclidean_ap", "value": 86.13858297902517}, {"type": "euclidean_f1", "value": 79.87679671457904}, {"type": "euclidean_precision", "value": 82.0675105485232}, {"type": "euclidean_recall", "value": 77.8}, {"type": "manhattan_accuracy", "value": 99.63168316831683}, {"type": "manhattan_ap", "value": 86.31976532265482}, {"type": "manhattan_f1", "value": 80.10204081632654}, {"type": "manhattan_precision", "value": 81.77083333333334}, {"type": "manhattan_recall", "value": 78.5}, {"type": "max_accuracy", "value": 99.65049504950495}, {"type": "max_ap", "value": 88.1421623503371}, {"type": "max_f1", "value": 81.44072036018008}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 68.19604139959692}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": 
"mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 36.3569584557381}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 48.82174503355024}, {"type": "mrr", "value": 49.610933388506915}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.805895993742798}, {"type": "cos_sim_spearman", "value": 31.445431226826738}, {"type": "dot_pearson", "value": 24.441585432516867}, {"type": "dot_spearman", "value": 25.468117334810188}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.2}, {"type": "map_at_10", "value": 1.431}, {"type": "map_at_100", "value": 7.138999999999999}, {"type": "map_at_1000", "value": 17.933}, {"type": "map_at_3", "value": 0.551}, {"type": "map_at_5", "value": 0.7979999999999999}, {"type": "mrr_at_1", "value": 76.0}, {"type": "mrr_at_10", "value": 85.167}, {"type": "mrr_at_100", "value": 85.21300000000001}, {"type": "mrr_at_1000", "value": 85.21300000000001}, {"type": "mrr_at_3", "value": 84.667}, {"type": "mrr_at_5", "value": 85.167}, {"type": "ndcg_at_1", "value": 72.0}, {"type": "ndcg_at_10", "value": 63.343}, {"type": "ndcg_at_100", "value": 45.739999999999995}, {"type": "ndcg_at_1000", "value": 41.875}, {"type": "ndcg_at_3", "value": 68.162}, {"type": "ndcg_at_5", "value": 65.666}, {"type": "precision_at_1", "value": 76.0}, {"type": "precision_at_10", "value": 66.4}, {"type": "precision_at_100", "value": 46.800000000000004}, {"type": "precision_at_1000", "value": 18.996}, {"type": "precision_at_3", "value": 72.667}, {"type": "precision_at_5", "value": 68.4}, {"type": "recall_at_1", "value": 0.2}, {"type": "recall_at_10", "value": 1.712}, {"type": "recall_at_100", "value": 10.896}, {"type": "recall_at_1000", "value": 40.115}, {"type": "recall_at_3", "value": 0.594}, {"type": "recall_at_5", "value": 0.889}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 1.0619999999999998}, {"type": "map_at_10", "value": 5.611}, {"type": "map_at_100", "value": 8.841000000000001}, {"type": "map_at_1000", "value": 10.154}, {"type": "map_at_3", "value": 2.7720000000000002}, {"type": "map_at_5", "value": 4.181}, {"type": "mrr_at_1", "value": 14.285999999999998}, {"type": "mrr_at_10", "value": 26.249}, {"type": "mrr_at_100", "value": 28.046}, {"type": "mrr_at_1000", "value": 28.083000000000002}, {"type": "mrr_at_3", "value": 21.769}, {"type": "mrr_at_5", "value": 24.524}, {"type": "ndcg_at_1", "value": 11.224}, {"type": "ndcg_at_10", "value": 12.817}, {"type": "ndcg_at_100", "value": 23.183999999999997}, {"type": "ndcg_at_1000", "value": 35.099000000000004}, {"type": "ndcg_at_3", "value": 11.215}, {"type": "ndcg_at_5", "value": 12.016}, {"type": "precision_at_1", "value": 14.285999999999998}, {"type": "precision_at_10", "value": 12.653}, {"type": "precision_at_100", 
"value": 5.306}, {"type": "precision_at_1000", "value": 1.294}, {"type": "precision_at_3", "value": 13.605}, {"type": "precision_at_5", "value": 13.877999999999998}, {"type": "recall_at_1", "value": 1.0619999999999998}, {"type": "recall_at_10", "value": 10.377}, {"type": "recall_at_100", "value": 34.77}, {"type": "recall_at_1000", "value": 70.875}, {"type": "recall_at_3", "value": 3.688}, {"type": "recall_at_5", "value": 6.2509999999999994}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 71.8488}, {"type": "ap", "value": 15.590122317097372}, {"type": "f1", "value": 55.86108396102662}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 57.61460101867573}, {"type": "f1", "value": 57.8678726826158}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 32.01459876897588}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 84.1032365738809}, {"type": "cos_sim_ap", "value": 66.60137415520323}, {"type": "cos_sim_f1", "value": 62.12845010615712}, {"type": "cos_sim_precision", "value": 62.493326214628944}, {"type": "cos_sim_recall", "value": 61.76781002638523}, {"type": "dot_accuracy", "value": 81.85015199380103}, {"type": "dot_ap", "value": 58.854644211365084}, {"type": "dot_f1", "value": 56.15180082185158}, {"type": "dot_precision", "value": 51.806422836752894}, {"type": "dot_recall", "value": 61.2928759894459}, {"type": "euclidean_accuracy", "value": 83.6681170650295}, {"type": "euclidean_ap", "value": 64.93555585305603}, {"type": "euclidean_f1", "value": 61.02775195857125}, {"type": "euclidean_precision", "value": 61.42742582197273}, {"type": "euclidean_recall", "value": 60.633245382585756}, {"type": "manhattan_accuracy", "value": 83.73368301841808}, {"type": "manhattan_ap", "value": 65.45422483039611}, {"type": "manhattan_f1", "value": 61.58552806597499}, {"type": "manhattan_precision", "value": 62.09763948497854}, {"type": "manhattan_recall", "value": 61.08179419525066}, {"type": "max_accuracy", "value": 84.1032365738809}, {"type": "max_ap", "value": 66.60137415520323}, {"type": "max_f1", "value": 62.12845010615712}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.36628245430201}, {"type": "cos_sim_ap", "value": 79.29963896460292}, {"type": "cos_sim_f1", "value": 72.63895990066467}, {"type": "cos_sim_precision", "value": 69.09128803668196}, {"type": "cos_sim_recall", "value": 76.57068062827224}, {"type": "dot_accuracy", "value": 84.65091007878294}, 
{"type": "dot_ap", "value": 75.04883449222972}, {"type": "dot_f1", "value": 69.18569117382708}, {"type": "dot_precision", "value": 64.89512376070682}, {"type": "dot_recall", "value": 74.08376963350786}, {"type": "euclidean_accuracy", "value": 85.88116583226608}, {"type": "euclidean_ap", "value": 78.42687640324908}, {"type": "euclidean_f1", "value": 71.74350111107192}, {"type": "euclidean_precision", "value": 66.19800820152314}, {"type": "euclidean_recall", "value": 78.3030489682784}, {"type": "manhattan_accuracy", "value": 86.27508052935926}, {"type": "manhattan_ap", "value": 79.29581298930101}, {"type": "manhattan_f1", "value": 72.51838235294117}, {"type": "manhattan_precision", "value": 67.03921568627452}, {"type": "manhattan_recall", "value": 78.97289805974745}, {"type": "max_accuracy", "value": 86.36628245430201}, {"type": "max_ap", "value": 79.29963896460292}, {"type": "max_f1", "value": 72.63895990066467}]}]}]}
McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse
null
[ "peft", "safetensors", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2404.05961", "license:mit", "model-index", "region:us" ]
null
2024-04-30T02:45:32+00:00
text-generation
transformers
# Model Details

<b>Ko-Llama3-Luxia-8B</b>, trained and released by the language model team at Saltlux AI Labs, is a version of Meta's Llama-3-8B that has been <b>specialized for Korean</b>.<br><br>
From more than 1TB of Korean training data held in-house, roughly 100GB was selected and used for pretraining.<br><br>
In addition, the publicly released Llama-3 tokenizer was extended with Korean tokens and used during pretraining.

- **Meta Llama-3:** Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
- **License:** Llama3 License [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

### Intended Use

Ko-Llama3-Luxia-8B was built for research purposes and can be freely fine-tuned and used for a wide range of natural language generation tasks.

### How to Use

This model card provides example code for running `Ko-Llama3-Luxia-8B` with the transformers library.

```
import transformers
import torch

model_id = "saltlux/Ko-Llama3-Luxia-8B"

# Build a bfloat16 text-generation pipeline, sharding the model across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto"
)

# Llama-3 prompts begin with the <|begin_of_text|> token
print(pipeline("<|begin_of_text|>안녕하세요. 솔트룩스 AI Labs 입니다."))
```

# Training Details

The pretraining data for Korean specialization is a corpus of roughly 100GB (collected through 2023) held by Saltlux, spanning domains such as news, law, patents, medicine, history, society, culture, and dialogue (written and spoken).<br>
- The currently released model has been trained for 0.9 epochs.<br>

### Use Device

Pretraining was carried out on 8x NVIDIA H100 80GB GPUs.

#### Training Hyperparameters

<table>
 <tr> <td><strong>Model</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Learning rate</strong> </td> <td><strong>Batch</strong> </td> <td><strong>Precision</strong> </td> </tr>
 <tr> <td>Ko-Llama3-Luxia-8B </td> <td>8B </td> <td>8k </td> <td>yes </td> <td>1e-5 </td> <td>128 </td> <td>bf16 </td> </tr>
</table>

### Tokenizer

To specialize the Llama-3 tokenizer for Korean, 17,536 Korean tokens were added to its vocabulary (a minimal sketch of this extension step, using the tokenizer API, appears at the end of this card).

<table>
 <tr> <td><strong>Model</strong> </td> <td><strong>Vocab Size</strong> </td> </tr>
 <tr> <td>Llama-3 </td> <td>128,256 </td> </tr>
 <tr> <td>Ko-Llama3-Luxia-8B </td> <td>145,792 </td> </tr>
</table>

### Tokenizer Result

+ Ko

<table>
 <tr> <td><strong>Input</strong> </td> <td><strong>Llama-3</strong> </td> <td><strong>Ko-Llama3-Luxia-8B</strong> </td> </tr>
 <tr> <td>요즘 날씨가 너무 오락가락해서 아직도 겨울옷을 못치웠어요.. </td> <td>['요', '즘', ' 날', '씨', '가', ' 너무', ' 오', '락', '가', '락', '해서', ' 아직', '도', ' 겨', '울', '�', '�', '을', ' 못', '치', '웠', '어요', '..'] </td> <td>['요즘', ' 날씨', '가', ' 너무', ' 오락', '가락', '해서', ' 아직', '도', ' 겨울', '옷', '을', ' 못', '치', '웠', '어요', '..'] </td> </tr>
 <tr> <td>맛있는 밥을 드셨습니까? 맛이 궁금하네요. </td> <td>['맛', '있는', ' �', '�', '을', ' 드', '셨', '습', '니까', '?', ' 맛', '이', ' 궁금', '하', '네요', '.'] </td> <td>['맛', '있는', ' 밥', '을', ' 드셨', '습', '니까', '?', ' 맛', '이', ' 궁금', '하', '네요', '.'] </td> </tr>
 <tr> <td>대법원부터 하급심 판례까지 원하는 판례를 찾는 가장 빠른 방법 - 서면 검색, 요청 판례, 유사 판례, AI 추천, 판례 및 법령 검색. </td> <td>['대', '법', '원', '부터', ' 하', '급', '심', ' 판', '례', '까지', ' 원', '하는', ' 판', '례', '를', ' 찾', '는', ' 가장', ' 빠', '른', ' 방법', ' -', ' 서', '면', ' 검색', ',', ' 요청', ' 판', '례', ',', ' 유', '사', ' 판', '례', ',', ' AI', ' 추천', ',', ' 판', '례', ' 및', ' 법', '령', ' 검색', '.'] </td> <td>['대', '법', '원', '부터', ' 하', '급', '심', ' 판례', '까지', ' 원', '하는', ' 판례', '를', ' 찾', '는', ' 가장', ' 빠른', ' 방법', ' -', ' 서면', ' 검색', ',', ' 요청', ' 판례', ',', ' 유사', ' 판례', ',', ' AI', ' 추천', ',', ' 판례', ' 및', ' 법령', ' 검색', '.'] </td> </tr>
 <tr> <td>본 발명은 금속판의 다수 부분을 에칭시켜 특정 무늬모양을 형성하는 건축용 금속재 장식판으로 이루어진 것에 특징이 있다. </td> <td>['본', ' 발', '명', '은', ' 금', '속', '판', '의', ' 다', '수', ' 부분', '을', ' 에', '칭', '시', '켜', ' 특', '정', ' 무', '�', '�', '모', '양', '을', ' 형', '성', '하는', ' 건', '축', '용', ' 금', '속', '재', ' 장', '식', '판', '으로', ' 이루', '어진', ' 것', '에', ' 특', '징', '이', ' 있다', '.'] </td> <td>['본', ' 발명', '은', ' 금속', '판', '의', ' 다수', ' 부분', '을', ' 에칭', '시', '켜', ' 특정', ' 무늬', '모', '양', '을', ' 형성', '하는', ' 건축', '용', ' 금속', '재', ' 장식', '판', '으로', ' 이루어진', ' 것', '에', ' 특징', '이', ' 있다', '.'] </td> </tr>
 <tr> <td>골다공증은 왜 생기는거에요? 그리고 치료하려면 어떻게해야하죠? </td> <td>['골', '다', '공', '증', '은', ' 왜', ' 생', '기는', '거', '에', '요', '?', ' 그리고', ' 치', '료', '하려', '면', ' 어떻게', '해야', '하', '죠', '?'] </td> <td>['골', '다', '공증', '은', ' 왜', ' 생', '기는', '거', '에', '요', '?', ' 그리고', ' 치료', '하려', '면', ' 어떻게', '해야', '하', '죠', '?'] </td> </tr>
</table>

+ En

<table>
 <tr> <td><strong>Input</strong> </td> <td><strong>Llama-3</strong> </td> <td><strong>Ko-Llama3-Luxia-8B</strong> </td> </tr>
 <tr> <td>Korean cuisine, hanguk yori, or hansik, has evolved through centuries of social and political change. </td> <td>['K', 'orean', ' cuisine', ',', ' h', 'angu', 'k', ' y', 'ori', ',', ' or', ' hans', 'ik', ',', ' has', ' evolved', ' through', ' centuries', ' of', ' social', ' and', ' political', ' change', '.'] </td> <td>['K', 'orean', ' cuisine', ',', ' h', 'angu', 'k', ' y', 'ori', ',', ' or', ' hans', 'ik', ',', ' has', ' evolved', ' through', ' centuries', ' of', ' social', ' and', ' political', ' change', '.'] </td> </tr>
 <tr> <td>Son Heung-min is a South Korean professional footballer who plays as a forward for and captains both Premier League club Tottenham Hotspur and the South Korea national team. </td> <td>['Son', ' He', 'ung', '-min', ' is', ' a', ' South', ' Korean', ' professional', ' football', 'er', ' who', ' plays', ' as', ' a', ' forward', ' for', ' and', ' captains', ' both', ' Premier', ' League', ' club', ' Tottenham', ' Hot', 'sp', 'ur', ' and', ' the', ' South', ' Korea', ' national', ' team', '.'] </td> <td>['Son', ' He', 'ung', '-min', ' is', ' a', ' South', ' Korean', ' professional', ' football', 'er', ' who', ' plays', ' as', ' a', ' forward', ' for', ' and', ' captains', ' both', ' Premier', ' League', ' club', ' Tottenham', ' Hot', 'sp', 'ur', ' and', ' the', ' South', ' Korea', ' national', ' team', '.'] </td> </tr>
</table>

### Inference Result

<table>
 <tr> <td><strong>Llama-3</strong> </td> <td><strong>Ko-Llama3-Luxia-8B</strong> </td> </tr>
 <tr> <td><b>한국의 문화 중</b>에서 가장 큰 특징은 무엇인가?\nWhat is the most distinctive feature of Korean culture?\nKorean culture has many unique features. But I think that the most distinctive feature of Korean culture is its harmony with nature.\nThe harmony between people and nature is a very important concept in Korea. In fact, it is one of the basic principles of Confucianism, which is the traditional philosophy of Korea. According to this principle, humans should live in harmony with nature and respect the natural order of things. (rest omitted) </td> <td><b>한국의 문화 중</b>에 가장 큰 특징은 무엇일까? 바로 ‘빨리빨리’다. 한국인은 빨리빨리 문화를 통해 많은 것을 이루었다. 하지만 이 빠른 속도는 우리에게 어떤 영향을 미쳤을까?\n한국인의 빠른 속도는 긍정적인 면과 부정적인 면을 동시에 가지고 있다. 긍정적인 면은, 빠른 속도로 인해 우리는 많은 것을 이룰 수 있었다. 예를 들어, 한국은 세계에서 가장 빠르게 경제 성장을 이룬 나라 중 하나이며, IT 분야에서도 세계적인 강국으로 자리 잡았다.\n하지만 빠른 속도는 부정적인 면도 가지고 있다. 한국의 빠른 속도는 스트레스와 불안을 증가시켰다. 또한, 빠른 속도는 환경 파괴와 자원 고갈을 초래했다.\n\n빠른 속도의 장점 (rest omitted) </td> </tr>
 <tr> <td><b>한국의 대표적인 술</b>인 소주와 맥주의 생산과정과 특성에 관한 연구\nA Study on the Production Process and Characteristics of Korean Soju and Beer\nThe purpose of this study was to investigate the production process and characteristics of soju and beer. The results are as follows: 1. The raw materials used for making soju were rice, wheat, corn, barley, sweet potato, and buckwheat. The main ingredients in soju were alcohol, water, sugar, and flavoring agents. The main flavoring agents were glycerin, caramel color, and artificial flavors. (rest omitted) </td> <td><b>한국의 대표적인 술</b>인 소주와 막걸리는 모두 쌀로 만든다.\n쌀은 한국인의 주식이기 때문에 술을 만들 때도 가장 많이 사용한다.\n소주는 증류주이고, 막걸리는 발효주이다. 증류주는 알코올 도수가 높고, 발효주는 낮다.\n막걸리에는 효모가 살아 있어 탄산이 생긴다.\n그래서 막걸리를 마시면 속이 더부룩해지는 것이다.\n막걸리의 알코올 도수는 6~8% 정도인데, 소주의 알코올 도수는 20% 이상이다.\n술의 종류에 따라 알코올 도수가 다르지만, 일반적으로 알코올 도수가 높으면 빨리 취하고, 낮으면 천천히 취한다. (rest omitted) </td> </tr>
 <tr> <td><b>한국의 전통의상인</b> 한복과 일본의 전통의상인 요로카미는 모두 5대륙의 문화를 수용하고, 각기 다른 지역의 특징을 반영한 의상을 갖추고 있다. 이러한 의상의 특징은 각각의 국가에서 발전해 온 역사와 문화에 기초한다. 한편, 한국의 한복과 일본의 요로카미는 서로 비슷한 형태를 가지고 있지만, 그 의미는 다르다. 한복은 한국인의 정체성을 나타내며, 요로카미는 일본인의 정체성을 나타낸다. 따라서 이 두 가지 의상은 서로 다른 문화적 배경을 가진 사람들의 정체성 표현에 사용된다.\nThe traditional costumes of Korea and Japan are hanbok and yorokami respectively. Both have been influenced by the cultures of other countries and reflect the characteristics of their respective regions. The distinctive features of these costumes are based on the history and culture of each country. However, although hanbok and yorokami share similar forms, they have different meanings. Hanbok represents Korean identity while yorokami represents Japanese identity. (rest omitted) </td> <td><b>한국의 전통의상인</b> 한복은 한국의 문화를 대표하는 상징물이다. 하지만 최근에는 한복을 입는 사람들이 점점 줄어들고 있다. 이는 여러 가지 이유가 있겠지만, 그 중 하나가 바로 한복이 불편하다는 것이다. 특히 여성들은 한복을 입으면 활동하기 어렵다는 것을 가장 큰 단점으로 꼽는다.\n하지만 최근에는 이러한 단점을 보완한 새로운 형태의 한복들이 등장하고 있다. 예를 들어, 치마 대신 바지를 입거나, 블라우스 대신 티셔츠나 셔츠를 입는 등 다양한 변형된 한복들이 나오고 있다. 이러한 새로운 형태의 한복들은 한복의 장점을 살리면서도 현대인의 생활 방식에 맞게 디자인되어 많은 사람들의 관심을 끌고 있다. (rest omitted) </td> </tr>
</table>

### Citation instructions

**Ko-Llama3-Luxia-8B**
```
@article{kollama3luxiamodelcard,
  title={Ko Llama 3 Luxia Model Card},
  author={AILabs@Saltlux},
  year={2024},
  url={https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B/blob/main/README.md}
}
```

**Original Llama-3**
```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
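As a closing note, the token splits in the Tokenizer Result tables above can be reproduced with the standard transformers tokenizer API. The sketch below is illustrative only: it assumes network access to the Hugging Face Hub, and the `extend_vocab` helper is a hypothetical outline of the vocabulary-extension step described in the Tokenizer section, not Saltlux's actual training code.

```
from transformers import AutoTokenizer

# Tokenizer with the extended Korean vocabulary (145,792 tokens per the table above)
ko_tok = AutoTokenizer.from_pretrained("saltlux/Ko-Llama3-Luxia-8B")

sample = "요즘 날씨가 너무 오락가락해서 아직도 겨울옷을 못치웠어요.."
print(ko_tok.tokenize(sample))  # token pieces, as in the "Ko-Llama3-Luxia-8B" column
print(len(ko_tok))              # total vocabulary size, including the added tokens


def extend_vocab(tokenizer, model, new_tokens):
    """Hypothetical outline of the extension step: register new tokens with the
    tokenizer, then grow the model's embedding matrices to the new vocab size."""
    num_added = tokenizer.add_tokens(new_tokens)
    model.resize_token_embeddings(len(tokenizer))
    return num_added
```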
{"language": ["en", "ko"], "license": "llama3", "tags": ["saltlux", "luxia", "meta", "llama-3", "pytorch"], "pipeline_tag": "text-generation"}
saltlux/Ko-Llama3-Luxia-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "saltlux", "luxia", "meta", "llama-3", "pytorch", "conversational", "en", "ko", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:46:13+00:00
null
null
{}
4piken/Llama-3-Gozaru-8B-Instruct-q4_k_m.gguf
null
[ "gguf", "region:us" ]
null
2024-04-30T02:46:17+00:00
null
adapter-transformers
{"license": "apache-2.0", "library_name": "adapter-transformers", "tags": ["chemistry", "finance"], "datasets": ["HuggingFaceFW/fineweb"], "metrics": ["accuracy"]}
Liuza1/TEST009
null
[ "adapter-transformers", "chemistry", "finance", "dataset:HuggingFaceFW/fineweb", "license:apache-2.0", "region:us" ]
null
2024-04-30T02:46:17+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.8034 - F1 Score: 0.7222 - Accuracy: 0.7222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6145 | 3.92 | 200 | 0.5647 | 0.7013 | 0.7012 | | 0.5594 | 7.84 | 400 | 0.5661 | 0.6998 | 0.7012 | | 0.5262 | 11.76 | 600 | 0.5346 | 0.7291 | 0.7309 | | 0.4999 | 15.69 | 800 | 0.5313 | 0.7301 | 0.7321 | | 0.4821 | 19.61 | 1000 | 0.5290 | 0.7297 | 0.7296 | | 0.4628 | 23.53 | 1200 | 0.5367 | 0.7388 | 0.7395 | | 0.4448 | 27.45 | 1400 | 0.5443 | 0.7445 | 0.7444 | | 0.4332 | 31.37 | 1600 | 0.5785 | 0.7373 | 0.7383 | | 0.421 | 35.29 | 1800 | 0.5606 | 0.7377 | 0.7383 | | 0.4045 | 39.22 | 2000 | 0.5917 | 0.7278 | 0.7284 | | 0.3906 | 43.14 | 2200 | 0.5637 | 0.7493 | 0.7494 | | 0.3792 | 47.06 | 2400 | 0.5894 | 0.7426 | 0.7432 | | 0.3713 | 50.98 | 2600 | 0.6114 | 0.7380 | 0.7383 | | 0.3597 | 54.9 | 2800 | 0.5965 | 0.7403 | 0.7420 | | 0.3483 | 58.82 | 3000 | 0.6343 | 0.7493 | 0.7494 | | 0.3393 | 62.75 | 3200 | 0.6324 | 0.7479 | 0.7481 | | 0.3313 | 66.67 | 3400 | 0.6433 | 0.7444 | 0.7444 | | 0.3149 | 70.59 | 3600 | 0.6646 | 0.7493 | 0.7494 | | 0.3099 | 74.51 | 3800 | 0.6695 | 0.7457 | 0.7457 | | 0.2978 | 78.43 | 4000 | 0.6840 | 0.7504 | 0.7506 | | 0.2884 | 82.35 | 4200 | 0.7150 | 0.7469 | 0.7469 | | 0.282 | 86.27 | 4400 | 0.6910 | 0.7543 | 0.7543 | | 0.2731 | 90.2 | 4600 | 0.7317 | 0.7494 | 0.7494 | | 0.2688 | 94.12 | 4800 | 0.7520 | 0.7518 | 0.7519 | | 0.2639 | 98.04 | 5000 | 0.7343 | 0.7456 | 0.7457 | | 0.2519 | 101.96 | 5200 | 0.7702 | 0.7469 | 0.7469 | | 0.2442 | 105.88 | 5400 | 0.7690 | 0.7641 | 0.7642 | | 0.2401 | 109.8 | 5600 | 0.7829 | 0.7567 | 0.7568 | | 0.2368 | 113.73 | 5800 | 0.7875 | 0.7502 | 0.7506 | | 0.2296 | 117.65 | 6000 | 0.8258 | 0.7556 | 0.7556 | | 0.229 | 121.57 | 6200 | 0.8573 | 0.7373 | 0.7383 | | 0.22 | 125.49 | 6400 | 0.8249 | 0.7507 | 0.7506 | | 0.2103 | 129.41 | 6600 | 0.8483 | 0.7506 | 0.7506 | | 0.2061 | 133.33 | 6800 | 0.8493 | 0.7519 | 0.7519 | | 0.1994 | 137.25 | 7000 | 0.8967 | 0.7431 | 0.7432 | | 0.2008 | 141.18 | 7200 | 0.8804 | 0.7407 | 0.7407 | | 0.2001 | 145.1 | 7400 | 0.8870 | 0.7494 | 0.7494 | | 0.1938 | 149.02 | 7600 | 0.8987 | 0.7469 | 0.7469 | | 0.191 | 152.94 | 7800 | 0.8895 | 0.7518 | 0.7519 | | 0.1875 | 156.86 | 8000 | 0.9181 | 0.7517 | 0.7519 | | 0.1904 | 160.78 | 8200 | 0.9095 | 0.7445 | 0.7444 | | 0.1875 | 164.71 | 8400 | 0.9233 | 0.7579 | 0.7580 | | 0.1844 | 168.63 | 8600 | 0.9135 | 0.7494 | 0.7494 | | 0.1769 | 172.55 | 8800 | 0.9325 | 0.7494 | 0.7494 
| | 0.1787 | 176.47 | 9000 | 0.9225 | 0.7519 | 0.7519 | | 0.1731 | 180.39 | 9200 | 0.9389 | 0.7506 | 0.7506 | | 0.178 | 184.31 | 9400 | 0.9416 | 0.7506 | 0.7506 | | 0.1719 | 188.24 | 9600 | 0.9350 | 0.7519 | 0.7519 | | 0.1759 | 192.16 | 9800 | 0.9388 | 0.7506 | 0.7506 | | 0.1747 | 196.08 | 10000 | 0.9377 | 0.7494 | 0.7494 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
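No training script accompanies this card, but the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows (a sketch only; the output directory name is hypothetical, and the Adam settings shown are the library defaults, which match the reported values):

```python
from transformers import TrainingArguments

# Rough reconstruction of the reported hyperparameters; not the original script.
args = TrainingArguments(
    output_dir="GUE_mouse_0-seqsight_16384_512_56M-L8_f",  # hypothetical
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    adam_beta1=0.9,     # default, matches the card
    adam_beta2=0.999,   # default, matches the card
    adam_epsilon=1e-8,  # default, matches the card
)
```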
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_0-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:46:31+00:00
text-generation
transformers
{"license": "llama3"}
Vezora/Dolphin-llama-Instruct-8b
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:46:33+00:00
null
null
{"license": "openrail"}
tori29umai/lineart
null
[ "license:openrail", "region:us" ]
null
2024-04-30T02:47:04+00:00
null
transformers
# Model Card for Model ID Gemma 2B function calling. [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) fine-tuned on [hypervariance/function-calling-sharegpt](https://huggingface.co/datasets/hypervariance/function-calling-sharegpt). ## Usage Make sure you have the [peft](https://huggingface.co/docs/peft/en/index) package installed. You can install it with `pip install peft`. In the first example below, `prompt` is a string built from the prompt template shown further down. ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bodhicitta/gemma-2b-function-call", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("bodhicitta/gemma-2b-function-call", trust_remote_code=True, device_map="auto") prompt = "USER QUESTION" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, do_sample=True, temperature=0.1, top_p=0.95, max_new_tokens=100) print(tokenizer.decode(outputs[0])) ``` You can also use ShareGPT-formatted prompts: ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bodhicitta/gemma-2b-function-call", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("bodhicitta/gemma-2b-function-call", trust_remote_code=True, device_map="auto") chat = [ { "from": "system", "value": "SYSTEM PROMPT", }, { "from": "human", "value": "USER QUESTION" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, do_sample=True, temperature=0.1, top_p=0.95, max_new_tokens=100) print(tokenizer.decode(outputs[0])) ``` ## Prompt template ```text You are a helpful assistant with access to the following functions. Use them if required - { "name": "function name", "description": "function description", "parameters": { "type": "type (object/number/string)", "properties": { "property_1": { "type": "type", "description": "property description" } }, "required": [ "property_1" ] } } To use these functions respond with: <functioncall> {"name": "function_name", "arguments": {"arg_1": "value_1", "arg_2": "value_2", ...}} </functioncall> Edge cases you must handle: - If there are no functions that match the user request, you will respond politely that you cannot help. User Question: USER_QUESTION ``` Function calls are enclosed in `<functioncall>` and `</functioncall>` delimiters. The model was trained using the same turn delimiters as [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it): ```text <bos><start_of_turn>user Write a hello world program<end_of_turn> <start_of_turn>model ``` Use the `<end_of_turn>` stop sequence to prevent the model from generating further text.
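Downstream code then needs to detect and parse the emitted call; below is a minimal sketch (the regex, the sample output, and the error handling are illustrative, not part of the model's API):

```python
import json
import re

def extract_function_call(generated_text: str):
    """Return the parsed function call as a dict, or None for a plain-text answer."""
    match = re.search(r"<functioncall>\s*(\{.*\})\s*</functioncall>", generated_text, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1))

sample = '<functioncall> {"name": "get_weather", "arguments": {"city": "Paris"}} </functioncall>'
call = extract_function_call(sample)
print(call["name"], call["arguments"])  # get_weather {'city': 'Paris'}
```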
{"library_name": "transformers", "datasets": ["hypervariance/function-calling-sharegpt"]}
bodhicitta/gemma-2b-function-call
null
[ "transformers", "safetensors", "dataset:hypervariance/function-calling-sharegpt", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:48:41+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
uh1216/society-textbook-Llama3-8b-Instruct-10epoch
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:48:48+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
MohammadKarami/hard-roberta
null
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:49:10+00:00
automatic-speech-recognition
transformers
{}
sid330/whisper-tiny-ml
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:50:42+00:00
null
null
{"license": "apache-2.0"}
thesudip100/tigerdetection
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-30T02:51:04+00:00
null
null
{"license": "mit"}
JonSold/play2
null
[ "license:mit", "region:us" ]
null
2024-04-30T02:51:07+00:00
null
null
{}
milkshake721/whisper-small-zh-TW
null
[ "region:us" ]
null
2024-04-30T02:51:32+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 1.4551 - F1 Score: 0.7309 - Accuracy: 0.7309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6011 | 3.92 | 200 | 0.5556 | 0.7066 | 0.7074 | | 0.5249 | 7.84 | 400 | 0.5580 | 0.7106 | 0.7111 | | 0.4856 | 11.76 | 600 | 0.5463 | 0.7185 | 0.7185 | | 0.447 | 15.69 | 800 | 0.5502 | 0.7269 | 0.7272 | | 0.4048 | 19.61 | 1000 | 0.5753 | 0.7358 | 0.7358 | | 0.3629 | 23.53 | 1200 | 0.6555 | 0.7407 | 0.7407 | | 0.3252 | 27.45 | 1400 | 0.7201 | 0.7316 | 0.7321 | | 0.2864 | 31.37 | 1600 | 0.8212 | 0.7045 | 0.7074 | | 0.247 | 35.29 | 1800 | 0.7940 | 0.7383 | 0.7383 | | 0.2254 | 39.22 | 2000 | 0.8588 | 0.7331 | 0.7333 | | 0.1992 | 43.14 | 2200 | 0.8762 | 0.7441 | 0.7444 | | 0.1816 | 47.06 | 2400 | 0.9242 | 0.7432 | 0.7432 | | 0.165 | 50.98 | 2600 | 0.9660 | 0.7441 | 0.7444 | | 0.1452 | 54.9 | 2800 | 0.9626 | 0.7572 | 0.7593 | | 0.1322 | 58.82 | 3000 | 1.0145 | 0.7394 | 0.7395 | | 0.1221 | 62.75 | 3200 | 1.0980 | 0.7429 | 0.7432 | | 0.1161 | 66.67 | 3400 | 0.9950 | 0.7444 | 0.7444 | | 0.1018 | 70.59 | 3600 | 1.1577 | 0.7407 | 0.7407 | | 0.1036 | 74.51 | 3800 | 1.0732 | 0.7320 | 0.7321 | | 0.0904 | 78.43 | 4000 | 1.2036 | 0.7382 | 0.7383 | | 0.0882 | 82.35 | 4200 | 1.1308 | 0.7531 | 0.7531 | | 0.086 | 86.27 | 4400 | 1.1360 | 0.7531 | 0.7531 | | 0.0769 | 90.2 | 4600 | 1.1996 | 0.7494 | 0.7494 | | 0.0777 | 94.12 | 4800 | 1.2181 | 0.7555 | 0.7556 | | 0.0747 | 98.04 | 5000 | 1.1283 | 0.7432 | 0.7432 | | 0.0674 | 101.96 | 5200 | 1.2481 | 0.7507 | 0.7506 | | 0.065 | 105.88 | 5400 | 1.3065 | 0.7431 | 0.7432 | | 0.0647 | 109.8 | 5600 | 1.2507 | 0.7457 | 0.7457 | | 0.0636 | 113.73 | 5800 | 1.2672 | 0.7420 | 0.7420 | | 0.0562 | 117.65 | 6000 | 1.3532 | 0.7494 | 0.7494 | | 0.0566 | 121.57 | 6200 | 1.3167 | 0.7530 | 0.7531 | | 0.0524 | 125.49 | 6400 | 1.3500 | 0.7630 | 0.7630 | | 0.0517 | 129.41 | 6600 | 1.2672 | 0.7618 | 0.7617 | | 0.0481 | 133.33 | 6800 | 1.3279 | 0.7505 | 0.7506 | | 0.0472 | 137.25 | 7000 | 1.3358 | 0.7469 | 0.7469 | | 0.0467 | 141.18 | 7200 | 1.3197 | 0.7592 | 0.7593 | | 0.0433 | 145.1 | 7400 | 1.3898 | 0.7442 | 0.7444 | | 0.0446 | 149.02 | 7600 | 1.3824 | 0.7392 | 0.7395 | | 0.0443 | 152.94 | 7800 | 1.3549 | 0.7469 | 0.7469 | | 0.0443 | 156.86 | 8000 | 1.3287 | 0.7469 | 0.7469 | | 0.0448 | 160.78 | 8200 | 1.3284 | 0.7445 | 0.7444 | | 0.0389 | 164.71 | 8400 | 1.4215 | 0.7515 | 0.7519 | | 0.0371 | 168.63 | 8600 | 1.4181 | 0.7519 | 0.7519 | | 0.0348 | 172.55 | 8800 | 1.4227 | 0.7531 | 
0.7531 | | 0.0385 | 176.47 | 9000 | 1.4177 | 0.7531 | 0.7531 | | 0.0348 | 180.39 | 9200 | 1.4212 | 0.7456 | 0.7457 | | 0.0355 | 184.31 | 9400 | 1.4121 | 0.7556 | 0.7556 | | 0.0343 | 188.24 | 9600 | 1.4268 | 0.7482 | 0.7481 | | 0.0355 | 192.16 | 9800 | 1.4293 | 0.7494 | 0.7494 | | 0.0306 | 196.08 | 10000 | 1.4333 | 0.7469 | 0.7469 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
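Because this repository holds a PEFT adapter rather than a standalone model, inference loads the adapter on top of the base checkpoint. A minimal sketch under stated assumptions (the Auto class, `num_labels=2`, and the placeholder DNA sequence are guesses based on the binary-classification metrics above; adjust them to the actual seqsight architecture):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # assumed binary task
)

# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_56M-L32_f"
)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # placeholder sequence
print(model(**inputs).logits.argmax(dim=-1))
```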
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_0-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:53:50+00:00
text-generation
transformers
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, send us a message at [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab). This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/6MKLoX2ruLIaREiyb6coO.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (see details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai)'s high-performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | |------------------------|-----------|-----------|-----------|-----------| | Initialize From | Llama-3 8B| 65K | 262K | 524k | | Sequence Length 2^N | 16 | 18 | 19 | 20 | | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B | | Batch Size | 1 | 1 | 16 | 16 | | Gradient Accumulation Steps | 32 | 16 | 1 | 1 | | Steps | 30 | 24 | 50 | 50 | | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | 512 | 512 | | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | | Minutes to Train (Wall)| 202 | 555 | 61 | 87 | **Quants**: - [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF) - [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit) ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. 
"World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. 
### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download the original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted (tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 
100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, 
safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. **Misuse** If you access or use Llama 3, you agree to the Acceptable Use Policy. 
The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security</span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. 
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide). ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
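As background for the RoPE theta schedule in the Progressive Training table above, the NTK-aware initialization cited as [1] amounts to scaling the rotary base by the context-extension factor. A sketch of the idea only, not Gradient's exact recipe (the card notes the initial schedule was subsequently optimized empirically, so the table's final values differ):

```python
# NTK-aware RoPE base scaling: raise rope_theta so the extended context keeps
# roughly the original per-dimension rotation range (Peng et al., 2023).
def ntk_rope_theta(base_theta: float, scale: float, rotary_dim: int = 128) -> float:
    """base_theta: original rope_theta; scale: new_context / old_context."""
    return base_theta * scale ** (rotary_dim / (rotary_dim - 2))

# Llama-3 8B ships with rope_theta = 500000 at an 8k context; extending to 65k:
print(ntk_rope_theta(500_000, 65_536 / 8_192))
```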
{"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"}
blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4.8-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:54:40+00:00
null
null
{}
Alexleetw/db_resnet50_20240430-025217
null
[ "region:us" ]
null
2024-04-30T02:55:35+00:00
token-classification
transformers
{}
PurCL/codeart-26m-ti-O2
null
[ "transformers", "pytorch", "codeart", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:55:44+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2418 - F1 Score: 0.8934 - Accuracy: 0.8934 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4552 | 0.47 | 200 | 0.3930 | 0.8190 | 0.8203 | | 0.3589 | 0.95 | 400 | 0.3590 | 0.8398 | 0.8403 | | 0.3215 | 1.42 | 600 | 0.3148 | 0.8624 | 0.8624 | | 0.3185 | 1.9 | 800 | 0.2950 | 0.8710 | 0.8710 | | 0.2991 | 2.37 | 1000 | 0.2840 | 0.8772 | 0.8772 | | 0.2945 | 2.84 | 1200 | 0.2741 | 0.8818 | 0.8818 | | 0.2787 | 3.32 | 1400 | 0.2661 | 0.8837 | 0.8838 | | 0.2869 | 3.79 | 1600 | 0.2783 | 0.8797 | 0.8798 | | 0.2777 | 4.27 | 1800 | 0.2605 | 0.8856 | 0.8858 | | 0.27 | 4.74 | 2000 | 0.2659 | 0.8839 | 0.8839 | | 0.2697 | 5.21 | 2200 | 0.2534 | 0.8887 | 0.8890 | | 0.2658 | 5.69 | 2400 | 0.2568 | 0.8878 | 0.8878 | | 0.2587 | 6.16 | 2600 | 0.2483 | 0.8918 | 0.8919 | | 0.2581 | 6.64 | 2800 | 0.2550 | 0.8880 | 0.8881 | | 0.2597 | 7.11 | 3000 | 0.2529 | 0.8932 | 0.8933 | | 0.2524 | 7.58 | 3200 | 0.2534 | 0.8949 | 0.8949 | | 0.2545 | 8.06 | 3400 | 0.2499 | 0.8927 | 0.8928 | | 0.2489 | 8.53 | 3600 | 0.2523 | 0.8931 | 0.8931 | | 0.2574 | 9.0 | 3800 | 0.2424 | 0.8993 | 0.8993 | | 0.252 | 9.48 | 4000 | 0.2478 | 0.8939 | 0.8941 | | 0.2521 | 9.95 | 4200 | 0.2420 | 0.8990 | 0.8990 | | 0.2496 | 10.43 | 4400 | 0.2415 | 0.8982 | 0.8983 | | 0.2468 | 10.9 | 4600 | 0.2438 | 0.8980 | 0.8980 | | 0.2441 | 11.37 | 4800 | 0.2436 | 0.8974 | 0.8974 | | 0.2514 | 11.85 | 5000 | 0.2409 | 0.8973 | 0.8974 | | 0.2485 | 12.32 | 5200 | 0.2419 | 0.8986 | 0.8986 | | 0.2473 | 12.8 | 5400 | 0.2446 | 0.8975 | 0.8976 | | 0.2468 | 13.27 | 5600 | 0.2416 | 0.8968 | 0.8968 | | 0.2409 | 13.74 | 5800 | 0.2408 | 0.8967 | 0.8968 | | 0.2428 | 14.22 | 6000 | 0.2413 | 0.8971 | 0.8971 | | 0.2413 | 14.69 | 6200 | 0.2434 | 0.8975 | 0.8976 | | 0.2435 | 15.17 | 6400 | 0.2451 | 0.8968 | 0.8968 | | 0.2433 | 15.64 | 6600 | 0.2405 | 0.8975 | 0.8976 | | 0.2396 | 16.11 | 6800 | 0.2411 | 0.8978 | 0.8979 | | 0.2385 | 16.59 | 7000 | 0.2408 | 0.8974 | 0.8974 | | 0.2409 | 17.06 | 7200 | 0.2390 | 0.8986 | 0.8986 | | 0.2386 | 17.54 | 7400 | 0.2425 | 0.8962 | 0.8962 | | 0.2397 | 18.01 | 7600 | 0.2372 | 0.9000 | 0.9001 | | 0.2356 | 18.48 | 7800 | 0.2403 | 0.8976 | 0.8976 | | 0.2449 | 18.96 | 8000 | 0.2353 | 0.9011 | 0.9011 | | 0.2418 | 19.43 | 8200 | 0.2380 | 0.8989 | 0.8989 | | 0.2366 | 19.91 | 8400 | 0.2376 | 0.9005 | 0.9005 | | 0.2408 | 20.38 | 8600 | 0.2355 | 0.8994 | 0.8995 | | 0.2374 | 20.85 | 8800 | 0.2373 | 0.8999 | 0.8999 | | 0.2374 | 21.33 | 9000 | 0.2378 
| 0.8998 | 0.8998 | | 0.2363 | 21.8 | 9200 | 0.2382 | 0.8981 | 0.8981 | | 0.2378 | 22.27 | 9400 | 0.2367 | 0.8987 | 0.8987 | | 0.2358 | 22.75 | 9600 | 0.2376 | 0.9000 | 0.9001 | | 0.2382 | 23.22 | 9800 | 0.2372 | 0.8997 | 0.8998 | | 0.2395 | 23.7 | 10000 | 0.2368 | 0.9005 | 0.9005 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_1-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:55:46+00:00
token-classification
transformers
{}
PurCL/codeart-26m-ti-O1
null
[ "transformers", "pytorch", "codeart", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:56:15+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/irspo6v
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:56:23+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2334 - F1 Score: 0.8986 - Accuracy: 0.8986 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4136 | 0.47 | 200 | 0.3345 | 0.8515 | 0.8517 | | 0.3162 | 0.95 | 400 | 0.2988 | 0.8713 | 0.8713 | | 0.2877 | 1.42 | 600 | 0.2687 | 0.8810 | 0.8812 | | 0.2879 | 1.9 | 800 | 0.2558 | 0.8912 | 0.8912 | | 0.2721 | 2.37 | 1000 | 0.2541 | 0.8916 | 0.8916 | | 0.2655 | 2.84 | 1200 | 0.2551 | 0.8904 | 0.8904 | | 0.2535 | 3.32 | 1400 | 0.2464 | 0.8935 | 0.8936 | | 0.2618 | 3.79 | 1600 | 0.2509 | 0.8904 | 0.8904 | | 0.2518 | 4.27 | 1800 | 0.2451 | 0.8962 | 0.8964 | | 0.2484 | 4.74 | 2000 | 0.2489 | 0.8946 | 0.8946 | | 0.2486 | 5.21 | 2200 | 0.2368 | 0.8954 | 0.8956 | | 0.2458 | 5.69 | 2400 | 0.2442 | 0.8949 | 0.8949 | | 0.2391 | 6.16 | 2600 | 0.2308 | 0.9003 | 0.9004 | | 0.237 | 6.64 | 2800 | 0.2354 | 0.8981 | 0.8981 | | 0.2373 | 7.11 | 3000 | 0.2402 | 0.8971 | 0.8971 | | 0.2311 | 7.58 | 3200 | 0.2420 | 0.8989 | 0.8989 | | 0.2343 | 8.06 | 3400 | 0.2421 | 0.8947 | 0.8949 | | 0.2267 | 8.53 | 3600 | 0.2399 | 0.8999 | 0.8999 | | 0.236 | 9.0 | 3800 | 0.2302 | 0.9049 | 0.9050 | | 0.2277 | 9.48 | 4000 | 0.2316 | 0.9023 | 0.9024 | | 0.2307 | 9.95 | 4200 | 0.2287 | 0.9020 | 0.9020 | | 0.2248 | 10.43 | 4400 | 0.2297 | 0.9042 | 0.9042 | | 0.2244 | 10.9 | 4600 | 0.2340 | 0.9019 | 0.9019 | | 0.2214 | 11.37 | 4800 | 0.2301 | 0.9027 | 0.9027 | | 0.2284 | 11.85 | 5000 | 0.2298 | 0.9031 | 0.9032 | | 0.2255 | 12.32 | 5200 | 0.2275 | 0.9027 | 0.9027 | | 0.2238 | 12.8 | 5400 | 0.2349 | 0.9036 | 0.9036 | | 0.2229 | 13.27 | 5600 | 0.2302 | 0.9037 | 0.9038 | | 0.2185 | 13.74 | 5800 | 0.2304 | 0.9026 | 0.9027 | | 0.2183 | 14.22 | 6000 | 0.2329 | 0.9041 | 0.9041 | | 0.2168 | 14.69 | 6200 | 0.2325 | 0.9031 | 0.9032 | | 0.2204 | 15.17 | 6400 | 0.2296 | 0.9060 | 0.9060 | | 0.2201 | 15.64 | 6600 | 0.2305 | 0.9013 | 0.9014 | | 0.2142 | 16.11 | 6800 | 0.2341 | 0.9014 | 0.9016 | | 0.2133 | 16.59 | 7000 | 0.2342 | 0.9032 | 0.9032 | | 0.2168 | 17.06 | 7200 | 0.2277 | 0.9036 | 0.9036 | | 0.2133 | 17.54 | 7400 | 0.2300 | 0.9028 | 0.9029 | | 0.2123 | 18.01 | 7600 | 0.2280 | 0.9044 | 0.9044 | | 0.2089 | 18.48 | 7800 | 0.2290 | 0.9027 | 0.9027 | | 0.2171 | 18.96 | 8000 | 0.2257 | 0.9030 | 0.9030 | | 0.2137 | 19.43 | 8200 | 0.2281 | 0.9054 | 0.9054 | | 0.2094 | 19.91 | 8400 | 0.2279 | 0.9041 | 0.9042 | | 0.2135 | 20.38 | 8600 | 0.2260 | 0.9049 | 0.9050 | | 0.2117 | 20.85 | 8800 | 0.2290 | 0.9017 | 0.9019 | | 0.2092 | 21.33 | 9000 | 0.2281 
| 0.9042 | 0.9042 | | 0.2084 | 21.8 | 9200 | 0.2293 | 0.9047 | 0.9047 | | 0.2119 | 22.27 | 9400 | 0.2268 | 0.9040 | 0.9041 | | 0.207 | 22.75 | 9600 | 0.2285 | 0.9045 | 0.9045 | | 0.2089 | 23.22 | 9800 | 0.2282 | 0.9046 | 0.9047 | | 0.2116 | 23.7 | 10000 | 0.2276 | 0.9045 | 0.9045 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_1-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:56:31+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2366 - F1 Score: 0.9033 - Accuracy: 0.9033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3881 | 0.47 | 200 | 0.3073 | 0.8638 | 0.8639 | | 0.2998 | 0.95 | 400 | 0.2741 | 0.8817 | 0.8817 | | 0.2744 | 1.42 | 600 | 0.2525 | 0.8902 | 0.8904 | | 0.2724 | 1.9 | 800 | 0.2482 | 0.8960 | 0.8961 | | 0.257 | 2.37 | 1000 | 0.2407 | 0.8971 | 0.8971 | | 0.2497 | 2.84 | 1200 | 0.2416 | 0.8962 | 0.8962 | | 0.2397 | 3.32 | 1400 | 0.2366 | 0.8951 | 0.8952 | | 0.2468 | 3.79 | 1600 | 0.2349 | 0.9002 | 0.9002 | | 0.2386 | 4.27 | 1800 | 0.2357 | 0.8987 | 0.8989 | | 0.2355 | 4.74 | 2000 | 0.2376 | 0.9005 | 0.9005 | | 0.2353 | 5.21 | 2200 | 0.2313 | 0.8958 | 0.8961 | | 0.2318 | 5.69 | 2400 | 0.2368 | 0.8975 | 0.8976 | | 0.2241 | 6.16 | 2600 | 0.2261 | 0.9029 | 0.9030 | | 0.224 | 6.64 | 2800 | 0.2271 | 0.9006 | 0.9007 | | 0.2236 | 7.11 | 3000 | 0.2362 | 0.8995 | 0.8995 | | 0.216 | 7.58 | 3200 | 0.2318 | 0.9020 | 0.9020 | | 0.2202 | 8.06 | 3400 | 0.2342 | 0.8942 | 0.8944 | | 0.2099 | 8.53 | 3600 | 0.2285 | 0.9015 | 0.9016 | | 0.2209 | 9.0 | 3800 | 0.2281 | 0.9044 | 0.9045 | | 0.2112 | 9.48 | 4000 | 0.2227 | 0.9050 | 0.9051 | | 0.2165 | 9.95 | 4200 | 0.2234 | 0.9033 | 0.9033 | | 0.2078 | 10.43 | 4400 | 0.2281 | 0.9042 | 0.9042 | | 0.2054 | 10.9 | 4600 | 0.2314 | 0.9024 | 0.9024 | | 0.204 | 11.37 | 4800 | 0.2251 | 0.9055 | 0.9056 | | 0.2094 | 11.85 | 5000 | 0.2234 | 0.9026 | 0.9026 | | 0.2048 | 12.32 | 5200 | 0.2238 | 0.9032 | 0.9032 | | 0.2045 | 12.8 | 5400 | 0.2299 | 0.9066 | 0.9066 | | 0.2019 | 13.27 | 5600 | 0.2263 | 0.9043 | 0.9044 | | 0.1974 | 13.74 | 5800 | 0.2255 | 0.9047 | 0.9048 | | 0.1971 | 14.22 | 6000 | 0.2296 | 0.9050 | 0.9050 | | 0.1962 | 14.69 | 6200 | 0.2291 | 0.9036 | 0.9036 | | 0.198 | 15.17 | 6400 | 0.2250 | 0.9060 | 0.9060 | | 0.197 | 15.64 | 6600 | 0.2263 | 0.9036 | 0.9036 | | 0.1935 | 16.11 | 6800 | 0.2322 | 0.9025 | 0.9026 | | 0.19 | 16.59 | 7000 | 0.2373 | 0.9024 | 0.9024 | | 0.1914 | 17.06 | 7200 | 0.2278 | 0.9041 | 0.9041 | | 0.1877 | 17.54 | 7400 | 0.2306 | 0.9027 | 0.9027 | | 0.1885 | 18.01 | 7600 | 0.2263 | 0.9048 | 0.9048 | | 0.182 | 18.48 | 7800 | 0.2310 | 0.9008 | 0.9008 | | 0.1918 | 18.96 | 8000 | 0.2231 | 0.9051 | 0.9051 | | 0.1859 | 19.43 | 8200 | 0.2318 | 0.9035 | 0.9035 | | 0.1833 | 19.91 | 8400 | 0.2282 | 0.9052 | 0.9053 | | 0.1887 | 20.38 | 8600 | 0.2280 | 0.9045 | 0.9045 | | 0.1843 | 20.85 | 8800 | 0.2285 | 0.9030 | 0.9030 | | 0.182 | 21.33 | 9000 | 0.2307 | 
0.9030 | 0.9030 | | 0.1807 | 21.8 | 9200 | 0.2318 | 0.9041 | 0.9041 | | 0.1854 | 22.27 | 9400 | 0.2280 | 0.9045 | 0.9045 | | 0.179 | 22.75 | 9600 | 0.2292 | 0.9036 | 0.9036 | | 0.1796 | 23.22 | 9800 | 0.2303 | 0.9049 | 0.9050 | | 0.1817 | 23.7 | 10000 | 0.2296 | 0.9045 | 0.9045 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_1-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:57:16+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_16384_512_56M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.5884 - F1 Score: 0.6940 - Accuracy: 0.6946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6367 | 1.69 | 200 | 0.6074 | 0.6544 | 0.6543 | | 0.616 | 3.39 | 400 | 0.6016 | 0.6689 | 0.6691 | | 0.6008 | 5.08 | 600 | 0.5913 | 0.6721 | 0.6723 | | 0.5897 | 6.78 | 800 | 0.5851 | 0.6820 | 0.6819 | | 0.582 | 8.47 | 1000 | 0.5782 | 0.6850 | 0.6851 | | 0.5757 | 10.17 | 1200 | 0.5779 | 0.6797 | 0.6808 | | 0.5701 | 11.86 | 1400 | 0.5702 | 0.6913 | 0.6914 | | 0.5624 | 13.56 | 1600 | 0.5723 | 0.6927 | 0.6936 | | 0.5557 | 15.25 | 1800 | 0.5629 | 0.7073 | 0.7074 | | 0.5576 | 16.95 | 2000 | 0.5812 | 0.6677 | 0.6739 | | 0.5525 | 18.64 | 2200 | 0.5645 | 0.6906 | 0.6925 | | 0.5486 | 20.34 | 2400 | 0.5570 | 0.7062 | 0.7063 | | 0.5477 | 22.03 | 2600 | 0.5814 | 0.6795 | 0.6840 | | 0.5441 | 23.73 | 2800 | 0.5538 | 0.7137 | 0.7138 | | 0.5421 | 25.42 | 3000 | 0.5550 | 0.7138 | 0.7138 | | 0.5395 | 27.12 | 3200 | 0.5671 | 0.6865 | 0.6888 | | 0.5401 | 28.81 | 3400 | 0.5572 | 0.7046 | 0.7053 | | 0.5318 | 30.51 | 3600 | 0.5576 | 0.7190 | 0.7191 | | 0.5343 | 32.2 | 3800 | 0.5565 | 0.7062 | 0.7063 | | 0.5323 | 33.9 | 4000 | 0.5621 | 0.6967 | 0.6978 | | 0.5245 | 35.59 | 4200 | 0.5678 | 0.6969 | 0.6989 | | 0.5269 | 37.29 | 4400 | 0.5606 | 0.7040 | 0.7047 | | 0.5247 | 38.98 | 4600 | 0.5576 | 0.7088 | 0.7090 | | 0.5241 | 40.68 | 4800 | 0.5647 | 0.6984 | 0.6999 | | 0.5173 | 42.37 | 5000 | 0.5666 | 0.7078 | 0.7084 | | 0.5235 | 44.07 | 5200 | 0.5610 | 0.7051 | 0.7058 | | 0.5182 | 45.76 | 5400 | 0.5583 | 0.7075 | 0.7079 | | 0.517 | 47.46 | 5600 | 0.5584 | 0.7106 | 0.7106 | | 0.5169 | 49.15 | 5800 | 0.5588 | 0.7035 | 0.7042 | | 0.5161 | 50.85 | 6000 | 0.5630 | 0.6973 | 0.6984 | | 0.5105 | 52.54 | 6200 | 0.5605 | 0.7160 | 0.7159 | | 0.5094 | 54.24 | 6400 | 0.5604 | 0.7086 | 0.7090 | | 0.5124 | 55.93 | 6600 | 0.5581 | 0.7084 | 0.7084 | | 0.5093 | 57.63 | 6800 | 0.5582 | 0.7122 | 0.7122 | | 0.5081 | 59.32 | 7000 | 0.5635 | 0.7056 | 0.7063 | | 0.5045 | 61.02 | 7200 | 0.5594 | 0.7111 | 0.7111 | | 0.5051 | 62.71 | 7400 | 0.5613 | 0.7085 | 0.7090 | | 0.5062 | 64.41 | 7600 | 0.5608 | 0.7093 | 0.7095 | | 0.5047 | 66.1 | 7800 | 0.5625 | 0.7058 | 0.7063 | | 0.5047 | 67.8 | 8000 | 0.5576 | 0.7143 | 0.7143 | | 0.5019 | 69.49 | 8200 | 0.5599 | 0.7153 | 0.7153 | | 0.5 | 71.19 | 8400 | 0.5631 | 0.7162 | 0.7164 | | 0.5037 | 72.88 | 8600 | 0.5600 | 0.7127 | 0.7127 | | 0.4986 | 74.58 | 8800 | 0.5632 | 0.7108 | 0.7111 | | 0.5 | 76.27 | 9000 
| 0.5620 | 0.7093 | 0.7095 | | 0.5005 | 77.97 | 9200 | 0.5639 | 0.7091 | 0.7095 | | 0.5008 | 79.66 | 9400 | 0.5601 | 0.7148 | 0.7148 | | 0.4991 | 81.36 | 9600 | 0.5619 | 0.7131 | 0.7132 | | 0.4966 | 83.05 | 9800 | 0.5618 | 0.7121 | 0.7122 | | 0.4986 | 84.75 | 10000 | 0.5623 | 0.7125 | 0.7127 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
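## Reproducing the configuration The hyperparameters listed above map almost one-to-one onto `transformers.TrainingArguments`. The sketch below shows only that mapping; the output directory, the per-device batch split, and the eval cadence are assumptions (the table logs validation every 200 steps), and the data pipeline is omitted.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_mouse_4-seqsight_16384_512_56M-L1_f",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=128,  # assumes a single device
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                 # training_steps: 10000
    evaluation_strategy="steps",
    eval_steps=200,                   # matches the validation rows above
)
```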
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_4-seqsight_16384_512_56M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_16384_512_56M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:57:29+00:00
text-generation
transformers
# Llama3-ElonMusk-v1 This model was fine-tuned on a small dataset of Elon Musk's conversations (and simulated conversations). It will be updated regularly with better data, so don't lose hope. <sup>Test it out here: [Click me!](https://huggingface.co/spaces/Walmart-the-bag/Llama3-ElonMusk-v1)</sup> # Communication - **Humor:** You will experience Elon Musk's style of humor, along with other kinds. - **Thinking:** Because the model speaks like Elon, some conversations will show it musing about the future. - **Personality:** The model adopts some of Elon's personality, thinking and "speaking" like him. # Intended Use This model is not meant to criticize anyone; it was built for research and entertainment purposes. - **Lack of emotion:** The model focuses on replicating communication style but does not possess genuine emotions or an understanding of human feelings. # Considerations Be aware of the following: - **Misrepresentation:** Do not take the output as actual statements or opinions from Elon Musk. # Disclaimer This model is intended for research and entertainment purposes only. It should not be used for malicious purposes or to spread misinformation.
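A quick, hedged inference sketch (the chat template and sampling settings below are assumptions, not documented by this card):

```python
# Hypothetical usage; requires a GPU for bfloat16 and `accelerate` for device_map.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "Walmart-the-bag/Llama3-ElonMusk-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What do you think about going to Mars?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```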
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["elon", "musk", "humor"]}
Walmart-the-bag/Llama3-ElonMusk-v1
null
[ "transformers", "safetensors", "llama", "text-generation", "elon", "musk", "humor", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-30T02:57:41+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_16384_512_56M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.6222 - F1 Score: 0.7026 - Accuracy: 0.7026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6276 | 1.69 | 200 | 0.5976 | 0.6644 | 0.6676 | | 0.5955 | 3.39 | 400 | 0.5896 | 0.6757 | 0.6782 | | 0.576 | 5.08 | 600 | 0.5669 | 0.6946 | 0.6952 | | 0.5641 | 6.78 | 800 | 0.5605 | 0.7011 | 0.7015 | | 0.5555 | 8.47 | 1000 | 0.5629 | 0.6919 | 0.6930 | | 0.5447 | 10.17 | 1200 | 0.5608 | 0.7073 | 0.7079 | | 0.5408 | 11.86 | 1400 | 0.5568 | 0.7181 | 0.7180 | | 0.5297 | 13.56 | 1600 | 0.5745 | 0.6904 | 0.6930 | | 0.5227 | 15.25 | 1800 | 0.5576 | 0.7154 | 0.7153 | | 0.5216 | 16.95 | 2000 | 0.5811 | 0.6835 | 0.6888 | | 0.5131 | 18.64 | 2200 | 0.5653 | 0.7002 | 0.7015 | | 0.5108 | 20.34 | 2400 | 0.5655 | 0.7083 | 0.7090 | | 0.5044 | 22.03 | 2600 | 0.5744 | 0.7039 | 0.7053 | | 0.4954 | 23.73 | 2800 | 0.5593 | 0.7157 | 0.7159 | | 0.4951 | 25.42 | 3000 | 0.5793 | 0.7152 | 0.7180 | | 0.4892 | 27.12 | 3200 | 0.5788 | 0.7169 | 0.7169 | | 0.4859 | 28.81 | 3400 | 0.5762 | 0.7127 | 0.7127 | | 0.4768 | 30.51 | 3600 | 0.5841 | 0.7229 | 0.7228 | | 0.4783 | 32.2 | 3800 | 0.5898 | 0.7138 | 0.7138 | | 0.4728 | 33.9 | 4000 | 0.5859 | 0.7033 | 0.7037 | | 0.4631 | 35.59 | 4200 | 0.5970 | 0.7089 | 0.7095 | | 0.4624 | 37.29 | 4400 | 0.6009 | 0.7160 | 0.7159 | | 0.4609 | 38.98 | 4600 | 0.6058 | 0.7061 | 0.7063 | | 0.4546 | 40.68 | 4800 | 0.5962 | 0.7154 | 0.7153 | | 0.4453 | 42.37 | 5000 | 0.6066 | 0.7085 | 0.7084 | | 0.4484 | 44.07 | 5200 | 0.6098 | 0.7144 | 0.7143 | | 0.4443 | 45.76 | 5400 | 0.6057 | 0.7072 | 0.7074 | | 0.4386 | 47.46 | 5600 | 0.6195 | 0.7159 | 0.7159 | | 0.4391 | 49.15 | 5800 | 0.6116 | 0.7121 | 0.7122 | | 0.4357 | 50.85 | 6000 | 0.6152 | 0.7044 | 0.7047 | | 0.427 | 52.54 | 6200 | 0.6323 | 0.7153 | 0.7153 | | 0.4285 | 54.24 | 6400 | 0.6203 | 0.7069 | 0.7069 | | 0.4253 | 55.93 | 6600 | 0.6345 | 0.7138 | 0.7138 | | 0.4214 | 57.63 | 6800 | 0.6396 | 0.7101 | 0.7100 | | 0.4235 | 59.32 | 7000 | 0.6227 | 0.7091 | 0.7090 | | 0.417 | 61.02 | 7200 | 0.6208 | 0.7123 | 0.7122 | | 0.4138 | 62.71 | 7400 | 0.6298 | 0.7107 | 0.7106 | | 0.4161 | 64.41 | 7600 | 0.6342 | 0.7043 | 0.7042 | | 0.4122 | 66.1 | 7800 | 0.6420 | 0.7024 | 0.7026 | | 0.4095 | 67.8 | 8000 | 0.6380 | 0.7080 | 0.7079 | | 0.4072 | 69.49 | 8200 | 0.6399 | 0.7090 | 0.7090 | | 0.4058 | 71.19 | 8400 | 0.6439 | 0.7101 | 0.7100 | | 0.4056 | 72.88 | 8600 | 0.6512 | 0.7107 | 0.7106 | | 0.4013 | 74.58 | 8800 | 0.6546 | 0.7111 | 0.7111 | | 0.4025 | 76.27 
| 9000 | 0.6491 | 0.7032 | 0.7031 | | 0.4011 | 77.97 | 9200 | 0.6513 | 0.7064 | 0.7063 | | 0.4042 | 79.66 | 9400 | 0.6491 | 0.7106 | 0.7106 | | 0.4002 | 81.36 | 9600 | 0.6517 | 0.7112 | 0.7111 | | 0.397 | 83.05 | 9800 | 0.6514 | 0.7075 | 0.7074 | | 0.3982 | 84.75 | 10000 | 0.6512 | 0.7085 | 0.7084 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
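## Metric computation (sketch) The F1 and Accuracy columns above track each other closely, consistent with a roughly balanced binary task. For reference, this is how such metrics are commonly wired into a `Trainer` run; the sklearn calls and the macro average are assumptions about the author's metric code, not taken from it.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode is assumed
        "accuracy": accuracy_score(labels, preds),
    }
```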
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_4-seqsight_16384_512_56M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_16384_512_56M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:58:18+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/hk7leqz
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:58:23+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1240 - Accuracy: 0.9780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.53 | 1.0 | 1270 | 0.3255 | 0.9425 | | 0.2706 | 2.0 | 2540 | 0.2034 | 0.9630 | | 0.1923 | 3.0 | 3810 | 0.1934 | 0.9685 | | 0.1241 | 4.0 | 5080 | 0.1370 | 0.9783 | | 0.0978 | 5.0 | 6350 | 0.1240 | 0.9780 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
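## How to use A minimal, hedged sketch of running the fine-tuned classifier with the `pipeline` API. The returned label names (e.g. `LABEL_0`/`LABEL_1`) depend on the unspecified training data, so interpret them accordingly.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="DRAGOO/VGG16_MODEL")
print(clf("This is a sample sentence to classify."))
```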
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "model_results", "results": []}]}
DRAGOO/VGG16_MODEL
null
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:58:35+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "westlake-repl/SaProt_35M_AF2"}
CluelessNovice/demo_cls2
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:westlake-repl/SaProt_35M_AF2", "region:us" ]
null
2024-04-30T02:58:37+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny chinese - VingeNie This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set: - Loss: 1.0204 - Cer Ortho: 48.2903 - Cer: 37.8890 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer | |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:| | 2.027 | 0.1088 | 100 | 1.8566 | 58.8395 | 45.4613 | | 1.0547 | 0.2176 | 200 | 1.0853 | 50.8309 | 39.8595 | | 1.0003 | 0.3264 | 300 | 1.0360 | 47.7982 | 38.6397 | | 0.9744 | 0.4353 | 400 | 1.0224 | 48.7018 | 38.0597 | | 0.9318 | 0.5441 | 500 | 1.0204 | 48.2903 | 37.8890 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
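## How to use A rough usage sketch via the ASR pipeline; the audio path is a placeholder, and forcing Mandarin decoding via `generate_kwargs` assumes the checkpoint kept Whisper's standard generation config.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="VingeNie/whisper-tiny-zh_CN_cosine")
result = asr("sample_mandarin.wav", generate_kwargs={"language": "chinese", "task": "transcribe"})
print(result["text"])
```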
{"language": ["zh"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_1"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper Tiny chinese - VingeNie", "results": []}]}
VingeNie/whisper-tiny-zh_CN_cosine
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "zh", "dataset:mozilla-foundation/common_voice_16_1", "base_model:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:58:44+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Fighoture/Llama-2-7b-chat-shortgpt-25-percent-tuluv2-lora
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T02:58:48+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_16384_512_56M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.6372 - F1 Score: 0.7006 - Accuracy: 0.7010 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6217 | 1.69 | 200 | 0.5901 | 0.6811 | 0.6835 | | 0.5817 | 3.39 | 400 | 0.5714 | 0.6913 | 0.6914 | | 0.5601 | 5.08 | 600 | 0.5632 | 0.7070 | 0.7079 | | 0.5472 | 6.78 | 800 | 0.5507 | 0.7127 | 0.7127 | | 0.5337 | 8.47 | 1000 | 0.5625 | 0.7034 | 0.7042 | | 0.5124 | 10.17 | 1200 | 0.5598 | 0.7179 | 0.7185 | | 0.5058 | 11.86 | 1400 | 0.5682 | 0.7115 | 0.7122 | | 0.4835 | 13.56 | 1600 | 0.5793 | 0.7073 | 0.7079 | | 0.4697 | 15.25 | 1800 | 0.5997 | 0.7123 | 0.7122 | | 0.4598 | 16.95 | 2000 | 0.6119 | 0.6966 | 0.6999 | | 0.4425 | 18.64 | 2200 | 0.6169 | 0.7196 | 0.7196 | | 0.4262 | 20.34 | 2400 | 0.6129 | 0.7148 | 0.7148 | | 0.4116 | 22.03 | 2600 | 0.6334 | 0.7032 | 0.7031 | | 0.3975 | 23.73 | 2800 | 0.6435 | 0.7090 | 0.7090 | | 0.3912 | 25.42 | 3000 | 0.6873 | 0.7010 | 0.7031 | | 0.3745 | 27.12 | 3200 | 0.7078 | 0.7098 | 0.7100 | | 0.365 | 28.81 | 3400 | 0.7001 | 0.7117 | 0.7116 | | 0.3442 | 30.51 | 3600 | 0.7233 | 0.7126 | 0.7127 | | 0.3366 | 32.2 | 3800 | 0.7570 | 0.7011 | 0.7010 | | 0.3275 | 33.9 | 4000 | 0.7735 | 0.7052 | 0.7053 | | 0.3121 | 35.59 | 4200 | 0.7982 | 0.7037 | 0.7037 | | 0.3084 | 37.29 | 4400 | 0.8224 | 0.7095 | 0.7095 | | 0.3012 | 38.98 | 4600 | 0.8638 | 0.7036 | 0.7037 | | 0.2867 | 40.68 | 4800 | 0.8401 | 0.6999 | 0.6999 | | 0.2778 | 42.37 | 5000 | 0.8886 | 0.7006 | 0.7005 | | 0.2736 | 44.07 | 5200 | 0.8833 | 0.7062 | 0.7063 | | 0.2677 | 45.76 | 5400 | 0.8679 | 0.7010 | 0.7010 | | 0.2616 | 47.46 | 5600 | 0.9066 | 0.7095 | 0.7095 | | 0.2519 | 49.15 | 5800 | 0.9330 | 0.7139 | 0.7138 | | 0.2473 | 50.85 | 6000 | 0.9318 | 0.7064 | 0.7063 | | 0.2352 | 52.54 | 6200 | 0.9875 | 0.6990 | 0.6999 | | 0.233 | 54.24 | 6400 | 0.9606 | 0.7036 | 0.7037 | | 0.2313 | 55.93 | 6600 | 0.9651 | 0.7047 | 0.7047 | | 0.2234 | 57.63 | 6800 | 0.9671 | 0.7149 | 0.7148 | | 0.2252 | 59.32 | 7000 | 0.9618 | 0.6979 | 0.6978 | | 0.2197 | 61.02 | 7200 | 0.9472 | 0.7117 | 0.7116 | | 0.2151 | 62.71 | 7400 | 0.9910 | 0.7101 | 0.7100 | | 0.2112 | 64.41 | 7600 | 1.0059 | 0.7042 | 0.7042 | | 0.2058 | 66.1 | 7800 | 1.0244 | 0.7053 | 0.7053 | | 0.2008 | 67.8 | 8000 | 1.0108 | 0.7027 | 0.7026 | | 0.1945 | 69.49 | 8200 | 1.0328 | 0.7090 | 0.7090 | | 0.1998 | 71.19 | 8400 | 1.0314 | 0.7069 | 0.7069 | | 0.1965 | 72.88 | 8600 | 1.0642 | 0.7042 | 0.7047 | | 0.1943 | 74.58 | 8800 | 1.0605 | 0.7067 | 0.7069 | | 0.1886 | 
76.27 | 9000 | 1.0714 | 0.7075 | 0.7074 | | 0.1873 | 77.97 | 9200 | 1.0648 | 0.7101 | 0.7100 | | 0.1924 | 79.66 | 9400 | 1.0647 | 0.7019 | 0.7021 | | 0.181 | 81.36 | 9600 | 1.0811 | 0.7057 | 0.7058 | | 0.1854 | 83.05 | 9800 | 1.0786 | 0.7016 | 0.7015 | | 0.1813 | 84.75 | 10000 | 1.0816 | 0.7052 | 0.7053 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_4-seqsight_16384_512_56M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_16384_512_56M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_56M", "region:us" ]
null
2024-04-30T02:58:49+00:00
text-generation
transformers
Quantizations of https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct # From original readme ### 3. How to Use Here are some examples of how to use our model. #### Chat Model Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages = [
    {'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of the <|EOT|> token
# top_k/top_p are omitted here since do_sample=False makes them no-ops
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
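The snippet above targets the original full-precision weights; for the GGUF files in this repo, a llama.cpp-based runtime is the more natural fit. A hedged sketch with `llama-cpp-python` follows — the quant filename is a placeholder, so substitute one of the files actually present in the repo.

```python
# Sketch only: filename and context size are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="deepseek-coder-6.7b-instruct-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "write a quick sort algorithm in python."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```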
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "deepseek-coder-6.7b-instruct"], "pipeline_tag": "text-generation", "inference": false}
duyntnet/deepseek-coder-6.7b-instruct-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "deepseek-coder-6.7b-instruct", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-30T02:59:33+00:00
text-generation
transformers
# Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
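Since these are merged 16-bit weights, they should load with plain `transformers` as well as with Unsloth; a hedged sketch, untested against this exact repo:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "1024m/LLAMA3-SMM4H-Task6-16bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
```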
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
1024m/LLAMA3-SMM4H-Task6-16bit
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T02:59:49+00:00
null
null
{}
mozksoft/kawaiiRealisticAnime-a05-coreml-q6
null
[ "region:us" ]
null
2024-04-30T03:00:15+00:00